newb corner

Alex Moore - Niemi

Dec 13, 2015, 1:08:15 PM
to Type Theory Study Group
Hi all.

I'm a working programmer (at Zipcar, for what it's worth) but very curious about type theory. Mainly I've come to it via an interest in functional programming which--don't tell anyone--I suspect might be better than other programming paradigms.

I've not yet had the chance to pursue academic study in CS (I'd like to, though it's a little complicated life-wise right now). So I find some of the terms and format of the chosen book unfamiliar. After some discussion in the IRC channel, I figured I'd make a "newb corner" where folks at the shallow end can ask questions or make observations. There's also some talk of extra sessions to do exercises, and we could schedule those here.

I tend to think an implicit prereq for this group is having implemented an interpreter for a language, and I may try to do that soon. If anyone is interested in doing likewise with me as an exercise, let me know.

In any case, my first newb question actually harkens back to chapter 2. (I joined the group a bit late, so I have to go back a bit to catch up.) Given the examples for judgment forms (2.2a ..) my intuition at the moment is types are judgment forms by another name. Is this true?

Finally, I'm trying to follow along in a more natural format to me, which is programming some of the exercises or examples directly in a language. Here's me trying to put together the natural number judgment in F# (a lang I am new to). I couldn't figure out how to effectively wrap Z and S in a NatNum type though. Any other implementations or recommendations on that would be helpful as well.

type Z = Z with
    static member (!!) (Z) = 0

type S<'a> = S of 'a with
    static member inline (!!) (S a) = !!a + 1

// type NatNum

// inline allows polymorphism/generics
let inline asInt x = !!x

let one = S(Z)
printfn "one: %A" (asInt one)
let two = S(one)
printfn "two: %A" (asInt two)



Best,
Alex


Jon Sterling

Dec 13, 2015, 1:31:31 PM
to type-theory...@googlegroups.com
Hi Alex,

Your question "are types just judgments?" is a very good one. The answer
is "no and yes, but mostly no". Allow me to elaborate...

A judgment is simply a mathematical statement or assertion, and we say
that a judgment is "evident" when we have constructed the evidence (or
proof) of it (well, to be fair, the notion of a judgment is a bit more
elaborate than this, but there's no point in me getting into it here,
particularly in the context of bob's book). On the other hand, a type is
a mathematical object about which certain judgments can be made. For
instance, for an object [A], we can make the judgment [A type], which
means that [A] is a type; for a type [A] (i.e. an object [A] for which
[A type] obtains), we can make the judgment [A true], which means that
the type [A] is inhabited.

There are numerous other judgments that can be made concerning types as
well, depending on the particular type theory you're working with; for
example, in modal type theory we have [A valid], and in bidirectional
type theory we have [A verif] and [A use], and in classical type theory
we have [A false], and in polarized classical type theory we may have [A
trivial] and [A absurd] in addition to [A true] and [A false]. There's
no limit to the amount of interesting judgments that can be made about
types.

All this is to say, judgments come before types; another way to say it
is, a theory of types is given by a collection of judgments of a certain
kind. Type theory is an application of the "judgmental method".

Now, you may be wondering what the "yes" part of my initial remark was.
I was intentionally vague in my answer about the precise definition of
judgments and their evidence, because this is intended to be
"open-ended"; that is, you may get interesting and different behavior
depending on how you interpret these notions (for instance, in some
interpretations, "evidence" must always be a finitary object, but in
others, this is not the case). But if you want to formalize the notion
of a judgment and its evidence, however, you often end up with something
that looks a lot like a type theory, where each judgment is a type and
the evidence for the judgment is the inhabitants of that type; this
methodology is called "judgments as types", where you take an existing
type theory and use its types to code up the judgments of another
theory. This is the idea that underlies things like the Edinburgh
Logical Framework, which is used in Twelf, Abella and Beluga.

I hope that this helps! Feel free also to ask follow-up questions.

Best,
Jon




Alex Moore - Niemi

Dec 13, 2015, 1:44:51 PM
to Type Theory Study Group

Thanks for your quick reply Jon! I think actually you misread me — I didn’t ask if types are just judgments, I asked if types are just judgment forms. I’m talking specifically about (from the very intro to Ch. 2):


A judgment states that one or more abstract binding trees have a property or stand in some relation to one another. The property or relation itself is called a judgment form, and the judgment that an object or objects have that property or stand in that relation is said to be an instance of that judgment form. A judgment form is also called a predicate, and the objects constituting an instance are its subjects.


Does this change your answer significantly? I was struck by judgment forms as types because their inductive definitions are exactly analogous to type definitions I’ve seen. For instance, 2.3a..b define the judgment form of “tree”, and here’s an example from Haskell: https://wiki.haskell.org/Algebraic_data_type#Tree_examples
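To show what I mean, here’s my own paraphrase of rules 2.3a..b as a Haskell type (not the wiki’s exact code):

```haskell
-- My sketch of PFPL's "tree" judgment form as a Haskell ADT:
-- 2.3a says empty is a tree; 2.3b says node(a1; a2) is a tree
-- whenever a1 and a2 are.
data Tree = Empty | Node Tree Tree
  deriving Show

-- node(empty; node(empty; empty))
example :: Tree
example = Node Empty (Node Empty Empty)
```

The inductive definition of the judgment form and the data declaration look, to my eye, like the same thing.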


In part I was also wondering this because of my understanding (shallow though it is) of refinement types, in that they add predicates to types. But ime, you can essentially define types as implicit predicates. Does that make sense? Then, what is the real value of refinement types?


Best,

Alex

Jon Sterling

Dec 13, 2015, 2:07:55 PM
to type-theory...@googlegroups.com
Alex,

Thanks for the clarification; I think my answer stands without much
modification, but I can perhaps provide a bit more clarification in
relation to that statement in the book.

It may be easiest to read what bob calls a "judgment form" as a
"relation" on its subjects, rather than as a predicate (though I suppose
you could consider it a predicate on a product of ABTs). The definition
of a type is given by specifying a particular form of judgment, or
relation (predicate), on terms; the inhabitants of that type are then
said to be the objects which are in that relation (predicate).

Now, Bob gives this judgment [t tree], for instance, and that sounds a
lot like the definition of a *type* of trees. You can think of the [t
tree] judgment form as the relation through which a *type* of trees may
be defined, but it is not itself a type.

Now, we can actually give a concrete example of how types and judgments
in general differ! It is the case that [empty tree] is an evident
judgment, but what about [(lam(x.x) empty) tree]? (i.e. applying the
identity function to the empty tree). That judgment is actually not
evident! (Because it is not in the inductive definition of the [t tree]
form of judgment)

As I noted above, the definition of a type consists in a particular kind
of judgment / relation / predicate; in most type theories, it is
required that this judgment shall respect computation: that is, if
[J(M)] and [M] evaluates to [M'], then we must have [J(M')]. So, whilst
the definition of a type is going to include some form of judgment, it
can't be just any judgment. So, type theory adds extra structure to the
judgmental apparatus.

---------------------------------------------------------------------------------------------------------------------------

Regarding type refinements and predicates, this is a very tricky
question because it depends a lot on the view you take of type theory,
and even what continent you live on. But the basic idea of why they add
anything at all is that in the type theories for most programming
languages, inhabitance in a type is a purely syntactic matter: that is,
the inhabitants of the types are defined inductively in what is
basically a grammar.

Type refinements let you impose *semantic* constraints on the values of
types. An example of a type refinement is the subset, {x : A | B(x)},
which contains just those terms of type A that satisfy the predicate B.
Another example is x,y:A // R(x,y), which contains the elements of A,
but equates any x,y:A such that R(x,y). A type refinement system of this
sort is pretty much never going to be decidable, so it makes sense to
start with a simple & decidable type system and layer on top of it a
system of type refinements which can be discharged using a logical
framework (or a solver, etc.).

(The kind of refinements I'm discussing above are called "behavioral
refinements". There's another kind called "datasort refinements", which
can be checked decidably; they are a lot less powerful, though.)

Nuprl's type theory is a nice example of a fully-fledged behavioral
refinement system, where the underlying "syntactic" type system is just
the trivial type system with only one type, and all the Nuprl "types"
are actually refinements on this one type.

Best,
Jon





Joseph Abrahamson

Dec 13, 2015, 2:29:25 PM
to Jon Sterling, type-theory...@googlegroups.com
I’d like to share a sort of philosophical perspective that might be convenient for putting all of these things into proper place. First let me disclaim that this isn’t something I’m presenting as a valid philosophy itself, more just a mental tool.

So let’s talk about what this entire endeavor really is. By “entire endeavor” I actually mean something like “what are we doing when we do mathematics”, but I’ll specialize that argument a lot to the CS perspective you’re interested in, Alex.

So here’s a way of looking at it:

1. We want to study the properties of some “objects” which exist outside of our study. In particular, languages, fragments or utterances spoken in a particular language, types—all of these things can be assumed to just exist de facto. We want to study them.

2. To study these objects—again, things outside of us—we introduce an idea that we can control called a judgement. It’s a demarcation of “what we can know”. To judge [A type] is to come to know that “A”, whatever it is, is a type, whatever that is.

There’s something weird going on here, though, in that we’ve begun to write down things which we hope parallel what’s going on in our head. [A type] is just 2 words in order, but we want to construct rules so that it reflects the meaning going on inside our own head.

3. So we introduce the structures of judgements. This is what Ch3 was about: what makes a judgement make sense in use. We believe they must follow certain rules and some time was spent talking about, e.g., two different ways that the notion of “hypothetical judgement” could possibly make sense. This is weird! In essence, given that we have to write down what “meaning” means we’re having to make tough tradeoffs in what we accept as valid. This is in some sense a personal choice but we can also set aside our personal beliefs just to take some system of judging as OK—at least for purpose of study at the moment.

4. So now that we have a way of writing down what it means for “something” to “mean something to us” we have to have a way of writing down these “somethings”. Languages, language utterances, and types exist exogenously—but we have to have a system to note our learnings and execute proofs (essentially so that we can convince other people of what we believe!) and to do that we need a way of formalizing languages, language fragments, and types.

This is what ABTs are doing. They’re a language for capturing a structure which we sort of come to believe is rich enough to “get the job done”. Again there are a huge set of tradeoffs in this choice! But, PFPL makes the assertion that ABTs are “good enough” to capture what we want to talk about in languages, language utterances, and types and “convenient enough” to not have us be bogged down in detail.

5. So now we have a few “systems of thought” in place and understand some of the rules of them. We get some nice principles like rule induction which we’ll use in the future to execute “proofs” which allow us to go from one set of judgements (again, things that we know) to others. This essentially is a learning process (or, perhaps, a communication process where I write down a proof and send it to you and then you come to know something).

6. Chapter 4 began by introducing a profoundly boring language, but, importantly, it did so within this logical framework we’ve devised. The language is a set of allowable ABTs, the utterances are concrete ABTs themselves, as are the types. Within this framework we now start the real work in beginning to understand our types. For instance, we write down a proof that each utterance in the language “has” a unique type and, furthermore, that given any utterance and knowledge that it “has” some type, we can analyze that utterance by listing out all possible utterances which could take that type.

This is where we begin to care about the study of type theory.

We’ve started to take aim at the relationship between a language’s values and its types and asked why that relationship is important. We’ll soon introduce the dynamics of the language and ask how the types and the dynamics should relate (and then discover the important type safety property).

——


So that was kind of long, but I felt like it was time to get that perspective out.

Are types judgements? In the “workaday” perspective above: no! We use judgements as a system to study types.

Now before I leave it like this, looking all tidy, I should throw the twist. What makes my above perspective not such a complete picture?

Well, for one, it makes this assumption of exogenous types and languages. If you buy that, then there’s a question as to why does my particular logical framework accurately capture this exogenous thing? If you don’t buy it, then perhaps we are actually creating types when we specify the rules which give rise to their judgements.

This also gets to Jon’s point. If what we’ve described so far is a system of learning then, well, we know at least as programmers that it’s hardly uncommon to want to introduce more types. We do it all the time as we discover interesting parts of the problem to study. Thus, we have to always treat our system of judgements as open ended so that new types and their rules can grow.

These sorts of questions might not seem too damning to the picture I’ve drawn, but it turns out that they shake the foundations and push on the particulars of tradeoffs made. Eventually we get to the idea that there really is no consensus today on the right way of doing this all. Fortunately, we also realize that this isn’t such a big problem at first because many, many basic ideas exist more or less invariant to how you answer some of these philosophical conundrums.

Joseph Abrahamson

Dec 13, 2015, 2:40:45 PM
to Jon Sterling, type-theory...@googlegroups.com
Oh!

And if you want to implement a language interpreter I highly suggest it. You should write a trivial interpreter (dynamics) and type checker/inferer (statics). Ch4 gives you a language and type system and the rules of an inferer/checker (it’s called “Statics” after all) and Ch5 will give you the interpreter (dynamics).

To implement this language one will want an ABT library because this language uses variables! Thus, we have to talk about binding issues.

That said, one might want to get by without using an ABT library at first. That would be even simpler. I’d say that in a nice ML-ish language you can write a language, types, inferer/checker, and interpreter in under 20 lines—easy—for the following even-more-boring language

Typ t ::= 
    num        num        numbers
    bool        bool         booleans

Exp e ::=
    num[n]        n        numeral
    true        true        truth
    false        false        falsehood
    plus(e1, e2)        e1 + e2        addition
    eq0(e)        zero? e        equal to zero predicate

Note that I’m avoiding the use of variables. This makes the language extra-extra boring!

Here are the typing judgements

——————————
Gamma |- num[n] : num

——————————
Gamma |- true : bool

——————————
Gamma |- false : bool

Gamma |- e1 : num     Gamma |- e2 : num
————————————————————
Gamma |- plus(e1, e2) : num

Gamma |- e : num
——————————
Gamma |- eq0(e) : bool

Note that there’s no use of the “Gamma” at all. This is a property again of not using variables (so you’ll want to upgrade to a language which actually uses variables quickly!)

You may want to try to implement a naive checker/inferer for this language for now and then read Ch5 and see if you can figure out how to do an interpreter. When you do the checker/inferer I recommend you start with a top-level definition that’s an inferer, make it work somehow, look to see that it parallels the typing judgement rules I just listed, and then read Ch4 Ex1 and try to see if you can figure out the “analytic” and “synthetic” parts. I’ll go ahead and give you the Haskell types

     check :: Exp -> Typ -> Bool
     infer :: Exp -> Maybe Typ

where returns of “Nothing” and “False” indicate that the utterance does not type-check!
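In case it helps to see the whole shape at once, here’s one way those two functions might come out in Haskell (just a sketch; the constructor names are my own choice, only the typing rules come from above):

```haskell
-- Types and expressions of the variable-free language above.
data Typ = TNum | TBool
  deriving (Eq, Show)

data Exp
  = Num Int        -- num[n]
  | Tru            -- true
  | Fls            -- false
  | Plus Exp Exp   -- plus(e1, e2)
  | Eq0 Exp        -- eq0(e)
  deriving Show

-- Synthesize a type, one clause per typing rule. There is no Gamma
-- argument at all, since no rule above consults it.
infer :: Exp -> Maybe Typ
infer (Num _)      = Just TNum
infer Tru          = Just TBool
infer Fls          = Just TBool
infer (Plus e1 e2) =
  case (infer e1, infer e2) of
    (Just TNum, Just TNum) -> Just TNum
    _                      -> Nothing
infer (Eq0 e) =
  case infer e of
    Just TNum -> Just TBool
    _         -> Nothing

-- Check by synthesizing and comparing.
check :: Exp -> Typ -> Bool
check e t = infer e == Just t
```

So `infer (Eq0 (Plus (Num 1) (Num 2)))` comes out `Just TBool`, while `infer (Plus Tru (Num 1))` is `Nothing`: the utterance doesn’t type-check.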

Alex Moore - Niemi

Dec 13, 2015, 7:42:25 PM
to Type Theory Study Group
> Now, we can actually give a concrete example of how types and judgments
> in general differ! It is the case that [empty tree] is an evident
> judgment, but what about [(lam(x.x) empty) tree]? (i.e. applying the
> identity function to the empty tree). That judgment is actually not
> evident! (Because it is not in the inductive definition of the [t tree]
> form of judgment)

Can this example be restated as the difference between types and judgment forms? Specifically in the text: "the following rules constitute an inductive definition of the judgment form tree". So "tree" is a judgment form, codified by the given rules. Then a tree type could likewise be expressed in a very straightforward translation of that rule system into an implementation of a type system, no?

Finally, an explanation of what we mean by "evident" would be helpful for me. I'm a bit confused since I thought the system is always open to extension, such that we could prove identity is equal to its object, and then the judgment would be derivable.

& Joseph, I appreciate your replies, but currently don't find the first very operational for me. (I imagine it will help others to contextualize though.) I think I'm fairly clear from a philosophical perspective that our epistemology is constructive to an instrumental purpose: we make a model so we can use it to do stuff. I'd appreciate it if you restated some of the last of your first reply w/r/t judgment forms rather than judgments. For what it's worth, I hadn't made any connection between judgments themselves and types, so the continued discussion of it confuses me.

As to the 2nd reply, what does "Gamma |- " mean? Derivable from gamma? Why do we need to use it in the notation there? From the text, gamma "usually" means: "a finite collection of basic judgments". But what does that mean in the context where you're using it?

I def think I want to try to implement a static checker, then an interpreter. (Or perhaps the other way around, unsure.) Talking to my partner (a CS person) they recommended doing an interpreter in Racket. Any recommendations on that or a static checker?

Joseph Abrahamson

Dec 13, 2015, 9:01:24 PM
to Alex Moore - Niemi, Type Theory Study Group
“Gamma |- …” means “…” is derivable in a context “Gamma”. Contexts are essentially (so far) mappings from variable names to types so that we know what types are allowed to be substituted in for those variables.

The static checker will be easier to do first given that it follows the order of the book, but doing either order is alright. If you do interpreter then static checker you’ll run into an interesting property, though: when one does not first type check it’s possible that the language fragment that you interpret will get “stuck” or throw an error. With your checker you can validate that such errors are impossible.

And that’s cool re: the first response. It’s perspective that I’ve found helpful, but maybe not always the most important one.

“Judgement form” is just terminology which lets us distinguish between [A type] (a judgement) and “all judgements which determine some ABT is a type” (a judgement form). So for some judgement form F, some judgement J might be an instance of that form.

It’s not such a technically important distinction except to note that as we describe judgements we’ll pick interesting forms to work on, often one by one (although sometimes simultaneous form definition is necessary).

--
You received this message because you are subscribed to the Google Groups "Type Theory Study Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to type-theory-study...@googlegroups.com.
To post to this group, send email to type-theory...@googlegroups.com.

Alex Moore - Niemi

Dec 13, 2015, 10:38:47 PM
to Type Theory Study Group, bigba...@gmail.com
I'd actually recommend lifting that first reply into a blog post or separate posting on here, as it is valuable, but just doesn't address any of my immediate questions. One thing that is tough to remember about dealing with newbs is the threat of information overload.

At the moment, I am plodding through chap 3.

I think I can understand derivability okay now, although I'm not sure I know what expansion means in this context: "from the expansion R{J1,...,Jk }" Likewise I wouldn't know how to restate: " G for the expansion of R with an axiom corresponding to each judgment in G" Derivability seems weird to me only because we (seemingly by fiat?) treat the hypotheses/antecedents as "temporary" axioms. Though I imagine we do that in the same way we do mathematical induction? We assume an earlier case then prove a later? (Or vice versa.) Is derivability the property of a thing that allows me to induct on it, essentially?

To confirm my reading, translating 3.2 into the more general statement above 3.1: "a nat |-(2.2) succ(succ(a)) nat" a nat is gamma, |-(2.2) is |-(R), and succ(succ(a)) nat is K, yes?

Admissibility I don't get the use of yet. "the conclusion J is derivable from rules R whenever the assumptions G are all derivable from rules R" So G are the assumptions for the conclusion J, and I can get G from R, so transitively I can get J from R?

More notes as I have time to read more. Thanks again Joseph and Jon for your help. :)

Joseph Abrahamson

Dec 13, 2015, 11:11:33 PM
to Alex Moore - Niemi, Type Theory Study Group
- “expansion” there just emphasizes that we’re temporarily expanding the set of allowed rules from the core rule set R to R alongside all of the hypothesized judgements. E.g., to prove J1, J2, J3 |- X in R we show a derivation of X in the ruleset R + {J1, J2, J3}. It’s not really a technical term.

- Derivability of hypothetical judgements means exactly we can derive the conclusion under the (temporary) assumption of the antecedents. We’ll later see induction formalized as a particular use of a hypothetical judgement—so yes, you’re right on the money!

- That said, it’s not that derivability lets you induct exactly. There are many ways we’ll be inducting in time, at least, so one has to be careful. One can "induct on derivations” by noting that derivations themselves are just data structures. If they’re derivations of a judgement of a particular form then we might know all of the shapes they could possibly take and therefore execute a proof across all such derivations via derivation induction.

On the other hand, with hypothetical judgements in hand we have the technology to talk about hypothetical concepts like induction which, as you’ve noted, uses a hypothetical statement itself.

- Your interpretation of 3.2 is spot on.

- Admissibility occurs more or less how you say. Gamma |= J in R if as soon as I tell you that |- Gamma you can somehow show that |- J. This is more open-ended than derivability because you’re allowed to go in and inspect any proofs of |- Gamma and pull out pieces. Derivability is much more rigid, requiring careful application of the rules (and the hypotheses) to build a derivation tree.

The example given is nice. [succ(a) even |- a odd] seems plausible on reading, but we cannot derive it because rules (2.8) only give us that [succ(a) even |- succ(succ(a)) odd]. In some sense, our rules are “biased forward” and derivability isn’t strong enough to jump over that bias. On the other hand, [succ(a) even |= a odd] is *admissible* because we know that any inference landing on [succ(a) even] must have had [a odd] as its previous step (because rule 2.8b is the only way to derive [succ(a) even] for any a) and thus we can "steal" that judgement to admit our conclusion.

Note that to achieve admissibility we examined not only the immediate structure of the judgement but also made an argument about all possible ways to derive that judgement. This is where the extra power comes from.


Alex Moore - Niemi

Dec 14, 2015, 10:51:10 AM
to Type Theory Study Group, bigba...@gmail.com
Also, out of my curiosity I posted on Stack Overflow about trying to use inductive types in F#. If folks have other language examples I'd love to see them too.

Matt Oliveri

Dec 14, 2015, 5:53:28 PM
to Type Theory Study Group, bigba...@gmail.com
Anyone who's very eager to try coding up some stuff from PFPL should consider learning a tool that's actually *good* for defining inductive relations. Here's even and odd in Coq:
Inductive Even : nat -> Prop :=
| evenO : Even O
| evenS n : Odd n -> Even (S n)
with Odd : nat -> Prop :=
| oddS n : Even n -> Odd (S n).


This uses the nat type in the standard prelude:
Inductive nat : Set :=
| O            (* zero *)
| S (n : nat). (* successor *)


So note that these Even and Odd relations are still not exactly the same as the book's even and odd: the book's are relations on ABTs (constraining them to represent nats), not relations on a specific type of nats.

Alex Moore - Niemi

Dec 14, 2015, 9:05:51 PM
to Type Theory Study Group, bigba...@gmail.com
Hi Matt -- thanks for the example! Unfortunately, the resources for someone outside of academia to work with Coq are not super newb friendly last I looked. (Though that was a while ago.) If you have recommended resources or would lead a session to get up to speed on Coq, that'd be great.

FWIW, I find it very instructive to see the limits of "workaday" languages, too, so comparing a straightforward implementation like yours to contortions done in other languages still has value to me at least.

Matt Oliveri

Dec 14, 2015, 10:51:04 PM
to Type Theory Study Group, bigba...@gmail.com
I think "Software Foundations" is a newb-friendly intro to Coq, but I can't be sure. ;)
http://www.cis.upenn.edu/~bcpierce/sf/current/

I don't actually recommend trying to learn Coq and get coding right away. PFPL has been using an informal metalanguage to study type systems, which is traditional. Learning it that way worked for me. Coq also may not be the best choice of formal metalanguage for the material of PFPL. (But it's better than F#.)

Also, just in case there's some confusion: I wouldn't say one "implements" an inductive relation or judgement in general. It's a mathematical definition. Algorithmic judgements correspond to some algorithm that can be implemented, but in general judgements are not algorithmic. In particular, those Coq definitions of Even and Odd were not implementations. As it happens, Even and Odd are algorithmic, so they correspond to mutually-recursive functions that check whether a nat is even/odd, but the functions and the relations are not the same thing.
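For instance, those mutually-recursive checking functions might look like this in Haskell (a sketch with names of my own choosing; these *decide* the relations, they are not the relations):

```haskell
-- A unary-nat type, mirroring Coq's nat.
data Nat = O | S Nat

-- Mutually-recursive deciders for evenness/oddness: algorithms that
-- happen to agree with the Even/Odd relations, but distinct objects
-- from them.
isEven :: Nat -> Bool
isEven O     = True
isEven (S n) = isOdd n

isOdd :: Nat -> Bool
isOdd O     = False
isOdd (S n) = isEven n
```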

Joseph Abrahamson

unread,
Dec 14, 2015, 11:19:37 PM12/14/15
to Matt Oliveri, Type Theory Study Group, bigba...@gmail.com
I agree with Matt here. Semiformal or “informal” arguments will be much nicer to do with PFPL for a while. You should feel free to try formalizing a few things in a programming language (by writing interpreters and type checkers) or a proof system, but you will have a much, much bigger challenge in front of you if you try to get through PFPL formalizing *everything* in something like Coq.

Also Adam’s Certified Programming with Dependent Types is more newb friendly, I think. :)

--
You received this message because you are subscribed to the Google Groups "Type Theory Study Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to type-theory-study...@googlegroups.com.
To post to this group, send email to type-theory...@googlegroups.com.

Matt Oliveri

unread,
Dec 14, 2015, 11:50:19 PM12/14/15
to Type Theory Study Group, bigba...@gmail.com
On Monday, December 14, 2015 at 11:19:37 PM UTC-5, Joseph Abrahamson wrote:
Also Adam’s Certified Programming with Dependent Types is more newb friendly, I think. :)

Awesome book. I just thought it might be too fast-paced and cover too-advanced material. But maybe not.

Joseph Abrahamson

unread,
Dec 15, 2015, 8:52:54 AM12/15/15
to Matt Oliveri, Type Theory Study Group, bigba...@gmail.com
I think Adam’s “go right for the Sledgehammer” approach is nice if you can survive not completely understanding what’s going on at first. You get to more interesting material quickly then can backtrack to dig in.


Alex Moore - Niemi

unread,
Dec 16, 2015, 9:09:58 PM12/16/15
to Type Theory Study Group, atm...@gmail.com, bigba...@gmail.com
Well, the level of formality doesn't matter to me much. (Unless we mean by that directly "implementation".) It's more I learn best from repeated exposure, or "in media res" to some extent. Programs are generally easiest for me to grok, but if it's better to just do exercises some other way -- fine by me.

I've got Proof General running in Spacemacs (just a fun yak-shaving) and am looking through Adam's book. I like it so far. I met him once here in Boston, but found him not super friendly at that time. His UrWeb hadn't been picked up by many software peeps yet, and I was disappointed to learn from him he was only targeting smaller start ups.

One thing I did find accidentally helpful was this: http://logitext.mit.edu/tutorial It's how I realized sum types and product types are just logical operators on types! I am sure such is obvious to y'all, but it'd never occurred to me before.



Alex Moore - Niemi

unread,
Dec 16, 2015, 9:33:41 PM12/16/15
to Type Theory Study Group, atm...@gmail.com, bigba...@gmail.com
Oh, I did have another question: what is the difference between a proposition and a predicate?

Matt Oliveri

unread,
Dec 16, 2015, 11:00:51 PM12/16/15
to Type Theory Study Group, bigba...@gmail.com
A predicate is a propositional function: a function that "returns" a proposition.

Definition big (n:nat) := n > 1000. (* Predicate on nat *)

Check big.
big
     : nat -> Prop

Unlike a boolean in a programming language, a proposition may not be computable to either true or false. (Though (big n) always is.) But in classical logic (with law of excluded middle), "proposition" and "boolean" are often used interchangeably, because there, any proposition is provably equal to either true or false, even if you can't tell which.
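In a programming language we can usually only model the decidable case, where the predicate computes to true or false. A one-line Python analogue of that Coq definition:

```python
# A decidable predicate modeled as a boolean-valued function:
# "big" sends a natural number to a (here, computable) proposition.
def big(n: int) -> bool:
    return n > 1000
```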

Alex Moore - Niemi

unread,
Dec 21, 2015, 12:20:44 PM12/21/15
to Type Theory Study Group, bigba...@gmail.com
Matt: That was the clearest, best explanation I've gotten so far. Thank you! When things can't be judged true or false, what type encompasses the third value? https://en.wikipedia.org/wiki/Bottom_type ? Is it wrong to even say "third" value?

Meanwhile I've started writing a type checker based on 4.2 in PFPL which I've got up here: https://github.com/mooreniemi/pfpl My partner, who is much more versed in Haskell than I, and has taken a PL course, is helping me through it. I've been using Spacemacs which I also used to play with Proof General in Coq. (I put that on pause to try to continue with this first.)

Right now I'm sort of dodging the ABT understanding by not doing anything with the concrete syntax or parsing at all right yet.

Alex

Joseph Abrahamson

unread,
Dec 21, 2015, 12:32:09 PM12/21/15
to Alex Moore - Niemi, Type Theory Study Group
This is an interesting point about propositions and judgements: there is no third value. We start out in the state of "not knowing" and then move away from it by constructing a judgement (such as "A is true"). Sometimes we can show that we can know, at least in principle, a judgement for all arguments, but for many interesting judgements we actually do not know if a judgement of true or false can ever be determined. This is how you represent an "unsolved problem".

Alex Moore - Niemi

unread,
Dec 21, 2015, 6:02:01 PM12/21/15
to Type Theory Study Group, bigba...@gmail.com
Hmm, unsolved problem has an analogy in computation though right? Lack of termination? This is what I always thought bottom type meant, though I don't really know for sure.

Also I found this a little hard to follow:

Lemma 4.2 (Inversion for Typing). Suppose that Γ ⊢ e : τ. If e = plus(e1; e2), then τ = num, Γ ⊢ e1 : num, and Γ ⊢ e2 : num, and similarly for the other constructs of the language.

The context above appears to define inversion in this context as a way of pivoting: "[...] syntax-directed [...] [makes it] easy to give necessary conditions for typing an expression that invert the sufficient conditions expressed by the corresponding typing rule". Via the syntax direction, we can get the necessary conditions from the sufficient ones?

Or in other words, this mapping is injective (I'm not clear if it's surjective or not), so it's easy for us to go backwards as well as forwards?

Alex

Jon Sterling

unread,
Dec 21, 2015, 6:06:49 PM12/21/15
to type-theory...@googlegroups.com
Alex,

I would say that unsolved problems correspond to *possible* lack of
termination (i.e. if you wait long enough, it might terminate, but it
also might not). In general, things like this are called
"semidecidable", and there's a wealth of topological insights to be had
in this area (look up synthetic topology and the sierpinski space if
interested).

The "bottom type" is poorly named, but it just means falsity, and has no
inhabitants—every inhabitant of "bottom" vacuously terminates, because
there are no inhabitants.

best,
Jon

Alex Moore - Niemi

unread,
Dec 21, 2015, 6:17:54 PM12/21/15
to Type Theory Study Group
Jon,

Ahh thanks, yes that makes sense. I will check those out. :)

Oh, also, what does dom(Γ) mean? It is referenced in:

Lemma 4.3 (Weakening). If Γ ⊢ e′ : τ′, then Γ, x : τ ⊢ e′ : τ′ for any x ∉ dom(Γ) and any type τ.

Jon Sterling

unread,
Dec 21, 2015, 6:19:32 PM12/21/15
to type-theory...@googlegroups.com
alex,

no problem! I believe dom(Γ) is the set of variables in the context Γ;
so, dom(x:A,y:B) == {x,y}.

best,
jon

Matt Oliveri

unread,
Dec 21, 2015, 8:33:18 PM12/21/15
to Type Theory Study Group
On Monday, December 21, 2015 at 6:06:49 PM UTC-5, Jonathan Sterling wrote:
The "bottom type" is poorly named, but it just means falsity, and has no
inhabitants—every inhabitant of "bottom" vacuously terminates, because
there are no inhabitants.

The bottom type indeed has no inhabitants. But whether that means there are no programs with type bottom depends on the type system. In computational type theory, types classify computations directly, so there are no programs of type bottom. But in most programming languages, types classify values, and a program is judged to have a type when any value it may return has the appropriate type. In this style, bottom contains no *values*, and as Wikipedia says, there can be (and usually are) programs of type bottom, which do not return normally. They may not terminate, or they may terminate by throwing an exception, or maybe even weirder things, depending on the computational effects allowed.

Jon Sterling

unread,
Dec 21, 2015, 8:35:33 PM12/21/15
to type-theory...@googlegroups.com
Great point, Matt! Really wonderfully put, esp. re: whether types
classify values or programs.

Matt Oliveri

unread,
Dec 21, 2015, 11:21:30 PM12/21/15
to Type Theory Study Group, bigba...@gmail.com
On Monday, December 21, 2015 at 6:02:01 PM UTC-5, Alex Moore - Niemi wrote:
Also I found this a little hard to follow:

Lemma 4.2 (Inversion for Typing). Suppose that Γ ⊢ e : τ. If e = plus(e1; e2), then τ = num, Γ ⊢ e1 : num, and Γ ⊢ e2 : num, and similarly for the other constructs of the language.

The context above appears to define inversion in this context as a way of pivoting: "[...] syntax-directed [...] [makes it] easy to give necessary conditions for typing an expression that invert the sufficient conditions expressed by the corresponding typing rule". Via the syntax direction, we can get the necessary conditions from the sufficient ones?

Or in other words, this mapping is injective (I'm not clear if it's surjective or not), so it's easy for us to go backwards as well as forwards?

Kind of, if I understand you. A derivation rule always makes the stuff above the line sufficient for the stuff below the line. The fact that the rules are syntax-directed means that for any given expression operator at the root of an expression, there is only one possible derivation rule that could've been used at the root of a typing derivation for that expression. (And also, a derivation for a variable must've used the variable rule.) This means that all the stuff above the line must've been derivable too. So you get admissible rules that are sort-of inverses of the typing rules.

To actually talk about rules as functions (possibly injective and/or surjective), we'd need to first say how to consider judgements as sets or types. PFPL doesn't seem to be doing it that way.
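One way to see the syntax-directedness concretely: in a checker, the root operator of the expression picks out exactly one case, so knowing that plus(e1; e2) is well-typed lets you "invert" and conclude both subterms had to be num. A hedged Python sketch (the AST names Num, Str, Plus are invented for illustration):

```python
# Sketch of a syntax-directed checker for a fragment of PFPL's language E.
# Because exactly one rule applies per operator, a well-typed plus(e1; e2)
# can be inverted: both subterms must have checked at type num.
from dataclasses import dataclass

@dataclass
class Num:        # numeric literal num[n]
    n: int

@dataclass
class Str:        # string literal str[s]
    s: str

@dataclass
class Plus:       # plus(e1; e2)
    e1: object
    e2: object

def typecheck(e) -> str:
    if isinstance(e, Num):
        return "num"
    if isinstance(e, Str):
        return "str"
    if isinstance(e, Plus):
        # The only rule with plus below the line demands num above it.
        if typecheck(e.e1) == "num" and typecheck(e.e2) == "num":
            return "num"
        raise TypeError("plus applied to non-numbers")
    raise TypeError("unknown expression")
```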

Alex Moore - Niemi

unread,
Dec 22, 2015, 2:09:48 PM12/22/15
to Type Theory Study Group
second-order logic is an extension of first-order logic, which itself is an extension of propositional logic.[1] Second-order logic is in turn extended by higher-order logic and type theory.

The above is from Wikipedia. Is this true? And is there a simple table of operators available in the progressively "extended" logics up to type theory? It would be handy for reference for me. 

Alex Moore - Niemi

unread,
Dec 22, 2015, 6:20:21 PM12/22/15
to Type Theory Study Group
Ok, so does that mean

Lemma 4.3 (Weakening). If Γ ⊢ e′ : τ′, then Γ, x : τ ⊢ e′ : τ′ for any x ∉ dom(Γ) and any type τ.


translates to "adding another variable of another type has no effect on e' having type tau'"?
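To check my own reading, here is a toy-checker sketch in Python (the mini-language of Var/Num/Plus and the dict-as-context are invented for illustration): extending Γ with a fresh binding can't change the type of e′, because e′ never mentions the new variable.

```python
# Sketch: weakening in a toy checker. The context Gamma is a dict
# from variable names to types; the mini-language is invented.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Num:
    n: int

@dataclass
class Plus:
    e1: object
    e2: object

def typecheck(e, gamma):
    if isinstance(e, Var):
        return gamma[e.name]
    if isinstance(e, Num):
        return "num"
    if isinstance(e, Plus):
        assert typecheck(e.e1, gamma) == "num"
        assert typecheck(e.e2, gamma) == "num"
        return "num"
    raise TypeError("unknown expression")

e_prime = Plus(Var("y"), Num(1))
gamma = {"y": "num"}
t1 = typecheck(e_prime, gamma)
# Weakening: extend Gamma with a fresh x not in dom(Gamma);
# e' still checks at the same type.
t2 = typecheck(e_prime, {**gamma, "x": "str"})
```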




Matt Oliveri

unread,
Dec 22, 2015, 6:35:20 PM12/22/15
to Type Theory Study Group
Elements of base types like natural numbers are sometimes called "individuals". First-order logic has predicates on, and quantification over individuals only. Predicates themselves are not individuals.

In second-order logic, first-order predicates and quantification are available, but also propositions and first-order predicates are now "first-class", so you can have (second-order) predicates on (first-order) predicates, and quantification over propositions and first-order predicates. But second-order predicates are still second-class.

Higher-order logic makes predicates of any order first-class. At some point, other functions (not only predicates) were also made first-class. Usually the simply-typed lambda calculus is used to organize the objects at this point, whereas first- and second-order logic can get away without formal types. It is *not OK* to forget the distinction between different predicate orders, or different types more generally.

So I don't really know how to give a table of operators. It's more like relaxing restrictions on the existing operators than adding new ones.

Note that there's a big difference between HOL's use of types to organize the objects in a logic, on one hand, and propositions-as-types, on the other. In the former, propositions are not (necessarily) types, but are elements of the type of propositions. In the latter, propositions are types (whose elements are proofs).
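A rough analogy in Python of the order distinction, treating predicates as boolean functions (this conflates propositions with booleans, which as noted above is only safe classically):

```python
# A first-order predicate on individuals (natural numbers):
def even(n: int) -> bool:
    return n % 2 == 0

# A second-order "predicate on predicates": it takes a
# first-order predicate as its argument.
def holds_at_zero(p) -> bool:
    return p(0)
```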

Hendrik Boom

unread,
Dec 22, 2015, 7:18:11 PM12/22/15
to Type Theory Study Group


On Tuesday, 22 December 2015 14:09:48 UTC-5, Alex Moore - Niemi wrote:
second-order logic is an extension of first-order logic, which itself is an extension of propositional logic.[1]Second-order logic is in turn extended by higher-order logic and type theory.

The above is from Wikipedia. Is this true? And is there a simple table of operators available in the progressively "extended" logics up to type theory? It would be handy for reference for me. 

In first-order logic you get to quantify over individuals. In second-order logic, you also get to quantify over predicates. In third-order logic, you get to quantify over predicates of predicates, and so forth.

In type theories, a predicate on a type A is a function from A to propositions. So if you have a type of propositions (also known as a type of types) you can build this hierarchy within the type theory. But there's a catch. If the type theory has a type of types (as Per Martin-Löf's original type theory had) you can end up deriving analogues of various set-theoretic paradoxes (as you can in regular set theory with a set of all sets). I call paradoxes of this kind "paradoxes of overgeneralisation".

Intuitively, in any formal system that allows a lot of generalisation, you have to find a way to prevent things from generalising over themselves without some kind of restraint. If not, the edifice becomes ill-founded. An extreme example of unfoundedness is the proposition Q defined to be not Q. It is true iff it is false -- an eminently broken concept. Overgeneralisation tends to provide ways of smuggling circular definitions into the system.

This is perhaps one of the ways that we can tell that the judgement form [type] is not a type.

Still, the restrictions we have to impose on formal systems to prevent this trouble always seem artificial to me. I wish I could formulate a rationale for such restrictions that isn't just "Otherwise you'll have trouble".

-- hendrik

Alex Moore - Niemi

unread,
Dec 24, 2015, 3:13:24 PM12/24/15
to Type Theory Study Group

Lemma 4.4 (Substitution). If Γ, x : τ ⊢ e′ : τ′ and Γ ⊢ e : τ, then Γ ⊢ [e/x]e′ : τ′.


Wish the PDF copying was a little more true. (Also any tips on easily writing out logic symbols on mac?)

Was hoping someone could help me read this: "Γ, x : τ ⊢ e′ : τ′". Translation: in typing context Gamma, with x having type tau, "yields", "proves", "satisfies", or "entails"? that expression e prime has type tau prime.
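As an aside, here's how I'd exercise Lemma 4.4 in a toy checker (Python sketch; the Var/Num/Plus mini-language and dict-as-context are invented for illustration): if e′ checks under x : τ and e checks at τ, then [e/x]e′ checks at the same type with no x hypothesis needed.

```python
# Sketch of Lemma 4.4 (Substitution) in a toy checker.
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Num:
    n: int

@dataclass
class Plus:
    e1: object
    e2: object

def typecheck(e, gamma):
    if isinstance(e, Var):
        return gamma[e.name]
    if isinstance(e, Num):
        return "num"
    if isinstance(e, Plus):
        assert typecheck(e.e1, gamma) == "num"
        assert typecheck(e.e2, gamma) == "num"
        return "num"
    raise TypeError("unknown expression")

def subst(e, x, replacement):   # [replacement/x]e
    if isinstance(e, Var):
        return replacement if e.name == x else e
    if isinstance(e, Num):
        return e
    if isinstance(e, Plus):
        return Plus(subst(e.e1, x, replacement),
                    subst(e.e2, x, replacement))

e_prime = Plus(Var("x"), Num(1))                  # Gamma, x:num |- e' : num
e = Num(41)                                       # Gamma |- e : num
t_before = typecheck(e_prime, {"x": "num"})
t_after = typecheck(subst(e_prime, "x", e), {})   # [e/x]e' needs no x binding
```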

Cyrus Omar

unread,
Dec 24, 2015, 3:35:39 PM12/24/15
to Alex Moore - Niemi, Type Theory Study Group
The turnstile symbol when used in a hypothetical judgement is pronounced "entails". 

Though I usually just pronounce that judgement "assuming x has type T, e' has type T'" or, if you want to explicitly mention the other hypotheses, Γ, "Under any typing context Γ extended with the hypothesis that x has type T, e' has type T'" or more concisely, "Under any Γ where x has type T, e' has type T'". Others might have other preferred pronunciations.


Joseph Abrahamson

unread,
Dec 24, 2015, 6:20:40 PM12/24/15
to Cyrus Omar, Alex Moore - Niemi, Type Theory Study Group
I always read it "given 'x has type T' and any other hypotheses, we have e' having type T'".

Alex Moore - Niemi

unread,
Dec 25, 2015, 12:36:23 PM12/25/15
to Type Theory Study Group
 Can someone clarify what maximal means?

On page 61 for me under 5.1 Transition Systems, the book says:

A transition sequence is a sequence of states s0, ..., sn such that s0 initial, and si ↦ si+1 for every 0 ≤ i < n. A transition sequence is maximal iff there is no s such that sn ↦ s, and it is complete iff it is maximal and, in addition, sn final. Thus every complete transition sequence is maximal, but maximal sequences are not necessarily complete.

I'm a little confused by this: reading it backwards, sn couldn't have been the final state in the maximal definition, given that is the condition that makes it complete. But what else does "no s such that sn ↦ s" mean for sn? Other than "if s final, then there is no state s′ such that s ↦ s′"?

The only other references I see to this language are:

Matt Oliveri

unread,
Dec 25, 2015, 4:25:30 PM12/25/15
to Type Theory Study Group
Immediately above, it was explained that a state that can't take a step is called "stuck", and stuck states are not necessarily final. So a maximal sequence ends with a stuck state, and it's complete if that stuck state is also final.
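A tiny sketch of that distinction in Python (the state names and the transition function are made up; the point is only that "can't step" and "final" are separate notions):

```python
# Transition system sketch where stuck is not the same as final.
# step is a partial transition function: None means "no successor".
def step(s):
    return {"s0": "s1", "s1": "stuck_state"}.get(s)

def is_final(s):
    return s == "done"   # by fiat, only "done" counts as final

def run(s0):
    seq = [s0]
    while (nxt := step(seq[-1])) is not None:
        seq.append(nxt)
    return seq           # maximal by construction: last state can't step

seq = run("s0")
maximal = step(seq[-1]) is None
complete = maximal and is_final(seq[-1])
# Here maximal is True but complete is False: the run ends stuck, not final.
```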