Re: [Coq-Club] A criticism to CIC from the point of view of foundations of mathematics (Voevodsky's argument)


Ken Kubota

May 3, 2018, 11:22:59 AM5/3/18
to coq-...@inria.fr, meta...@googlegroups.com
Hi Russell,

While your statement (a "proof" always refers to a proof within a certain formal system) is correct when considered in isolation,
I find it problematic, as it overlooks that the formal systems (deduction systems) under discussion here are designed to represent mathematics.

Of course, this is the standard use of the terminology ("Now we prove the theorem MUIIU." - p. 179),
even in toy systems such as the MIU system, cf. pp. 177 ff. at
http://us.metamath.org/downloads/metamath.pdf
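
As a rough illustration, here is a minimal Coq sketch of the MIU system (the encoding of strings and rules below is mine, not taken from the Metamath book); "proving a theorem" in such a toy system simply means exhibiting a derivation within it:

Require Import List.
Import ListNotations.

(* The three MIU symbols; the constructor I harmlessly shadows Coq's proof of True. *)
Inductive sym : Type := M | I | U.

(* Derivability in the MIU system: the axiom MI plus Hofstadter's four rules. *)
Inductive derivable : list sym -> Prop :=
| miu_ax : derivable [M; I]
| miu_r1 : forall x, derivable (x ++ [I]) -> derivable (x ++ [I; U])
| miu_r2 : forall x, derivable (M :: x) -> derivable (M :: x ++ x)
| miu_r3 : forall x y, derivable (x ++ [I; I; I] ++ y) -> derivable (x ++ [U] ++ y)
| miu_r4 : forall x y, derivable (x ++ [U; U] ++ y) -> derivable (x ++ y).

(* A "theorem" of the toy system: MIU follows from MI by rule 1. *)
Example MIU_derivable : derivable [M; I; U].
Proof. exact (miu_r1 [M] miu_ax). Qed.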

But all the systems under discussion here (e.g., ZFC) implicitly or explicitly claim to represent (significant parts of) mathematics,
making it difficult to follow a pure relativism (denying provability "in any absolute sense").
For example, if one of the systems mentioned previously were shown to be inconsistent (or otherwise unsound), it would be discarded.

Moreover, I would even argue that some of these formal systems are better than others in representing mathematics,
e.g., type theory is superior to (axiomatic) set theory, and the ideal system should follow the concept of expressiveness of Q0
by Peter B. Andrews, who regularly emphasizes that mathematics should be expressed "naturally".

Hence, there should be an ideal formal system for representing mathematics, and proofs in it would be proofs in an "absolute sense".

I still owe Randy an answer on the HOL mailing list, which I hope to finish soon.

Best regards,

Ken

____________________________________________________


Ken Kubota
http://doi.org/10.4444/100



> On May 3, 2018, at 02:57, roco...@theorem.ca wrote:
>
> On Thu, 3 May 2018, Jose Manuel Rodriguez Caballero wrote:
>
>> Russell said:
>> "I also want to point out that proving that PA is consistent doesn't by itself mean that there isn't a proof that PA is inconsistent”
>> Ok, we are not talking the same language. In my world: PA is consistent means that there isn't a proof that PA is inconsistent (a proof that 1=0 formalized inside PA). I’m following
>> Hilbert’s metamathematical standards: https://plato.stanford.edu/entries/hilbert-program/

> Firstly, your statement is incomplete. There isn't a proof according to what system? Please, if you are going to dive into logic like this, don't use the term "proof" unadorned like this. Mathematical statements are not provable in any absolute sense. Mathematical statements are only provable or not within some deduction system. Coq can prove something, or ZFC can prove something, or PA can prove something, and these different systems yield completely different classes of statements that they can or cannot prove.

> Let me reiterate again that "Essential Incompleteness of Arithmetic Verified by Coq" is only claiming that "PA is consistent" is provable in Coq (and this is just a side remark within that paper). This is a claim that you can directly verify yourself by downloading the development and checking it in Coq and seeing for yourself that Coq indeed considers the deduction valid. The paper does not directly claim that "PA is consistent" is true; that would only follow if you assume that Coq is sound, and the paper doesn't discuss the soundness of Coq.


Ken Kubota

May 3, 2018, 4:31:55 PM5/3/18
to meta...@googlegroups.com
Hi Jan,

I definitely agree that ZFC is unsatisfactory: this is why I prefer type theory.

But historically ZFC (axiomatic set theory) was designed exactly for this purpose
in the aftermath of Russell's paradox, see [Zermelo, 1908] (English translation [Zermelo, 1967]).

For the references, please see: http://doi.org/10.4444/100.111

Ken

____________________________________________________


On May 3, 2018, at 22:08, Jan Burse <burs...@gmail.com> wrote:

For example Ramsey Cardinal and V=L seem to be two
opposite extensions of ZFC, in some aspects.

So I am very curious how this would look in Metamath,
see the other post "Definition of L" from 6 April,

and more...

On Thursday, May 3, 2018 at 22:03:51 UTC+2, Jan Burse wrote:
I don't agree with what Ken wrote: "But all the systems in discussion
here (e.g., ZFC) implicitly or explicitly claim to represent (significant
parts of) mathematics".

If you read Levy, Basic Set Theory, you see that ZFC has a lot of
holes, and isn't in itself interesting mathematics. It's a foundation you
need to apply and/or extend.



Ken Kubota

May 3, 2018, 5:47:59 PM5/3/18
to meta...@googlegroups.com, coq-...@inria.fr
If the criterion is representing all of mathematics, then ZFC is unsatisfactory.

Of course, a language capable of doing so requires dependent types.
Therefore, I have extended Q0 (simple type theory) to R0,
which provides some of the means for dependent types, see pp. 11 f. of

For the definition of the universal quantifier in R0, see p. 359.
(The logical constant 'Q' was replaced by '=', since 'Q' has no syntactic function of its own.)
This definition simply defines the universal quantifier as the set of properties (sets) in which
all objects of the given type are mapped to true (of which they are elements), hence the proposition holds 'for all'.
Moreover, the variable binders are reduced to a single one (lambda).
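
As a rough illustration in Coq (the encoding and the name q0_all below are mine, with Prop standing in for the type o of truth values), the same idea reads:

(* Q0-style universal quantifier: a predicate holds "for all" iff it equals
   the constant-true predicate; only equality and lambda are used. *)
Definition q0_all {A : Type} (p : A -> Prop) : Prop :=
  p = (fun _ : A => True).

(* One direction of the intended reading, provable without extra axioms: *)
Lemma q0_all_sound {A : Type} (p : A -> Prop) :
  q0_all p -> forall x : A, p x.
Proof. intros H x. rewrite H. exact I. Qed.

(The converse direction would require functional and propositional extensionality in Coq, whereas in Q0 the equation simply is the definition of the quantifier.)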

I share Bertrand Russell's program of logicism (that mathematics is reducible to formal logic);
hence a sharp distinction between the two is rather difficult.
A preliminary distinction may be the limitation of the language to rather simple definitions
(such as on pp. 359 f., using basic types like 'o' and similar only; for comparison, see
the rather complex group definition on p. 362).

Types naturally express the distinctions between different kinds of mathematical objects
(e.g., between numbers and operators) made by the mathematician intuitively.
There are some excellent quotes on this in Andrews' 2002 textbook.
Indeed, types are more fundamental than objects of first-order logic, as you,
for example, wish to distinguish between propositions (of type 'o'), operators ('oo'),
and connectives ('ooo' = '((oo)o)'). Type theory naturally expresses the grammar
of formal logic (and mathematics).
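
For a rough analogue in Coq (with Prop playing the role of the type o):

Check (True : Prop).                  (* a proposition:       type o   *)
Check (not : Prop -> Prop).           (* a unary operator:    type oo  *)
Check (and : Prop -> Prop -> Prop).   (* a binary connective: type ooo *)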

In lambda calculus, variables are clearly distinguished from constants (primitive symbols,
lambda abstractions and lambda applications) by the fact that only variables can be
bound by lambda (and by Theorem 5215 [Andrews, 2002, p. 221], instantiated freely).
Lambda calculus allows the reduction of functions of more than one argument to
those of just one argument (implementing Schönfinkel's concept), so "we can avoid explicitly
admitting into our language functions of more than one argument." [Andrews, 2002, p. 206]
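
A small Coq illustration of this currying (the names below are mine):

(* A two-argument function as a one-argument function returning a function,
   so partial application comes for free (Schönfinkel's reduction). *)
Definition add : nat -> nat -> nat := fun m => fun n => m + n.
Definition add_three : nat -> nat := add 3.   (* partial application *)

Example add_three_four : add_three 4 = 7.
Proof. reflexivity. Qed.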

All of mathematics can be expressed using just a few principles, which Andrews calls
"expressiveness" (I prefer the term "reducibility").

For the references, please see: http://doi.org/10.4444/100.111

Ken

____________________________________________________

On May 3, 2018, at 22:52, Jan Burse <burs...@gmail.com> wrote:

I did NOT say it's unsatisfactory. It's very satisfactory,
since it's very flexible, and still doesn't have too many
assumptions built in.

Also I don't know what you mean by type theory. Simple
types, dependent types? In ZFC the logic part is delegated
to FOL=, and the domain part is ZFC itself built on the

logic part; that's a very nice separation. How do you apply
your type theory? For example, if you define in Q0 the forall
operator (later used in the forall quantifier) as follows:

   Πο(οα) stands for [Qο(οα)(οα) [λxαTο] ]
   https://plato.stanford.edu/entries/type-theory-church/#ForBasEqu

Then things are somehow upside-down. You use types,
a kind of domain, to define logic. How do you justify such
a procedure?



José Manuel Rodriguez Caballero

May 4, 2018, 11:39:22 AM5/4/18
to roco...@theorem.ca, meta...@googlegroups.com

But I can find references to meta-mathematical proofs that PA doesn't prove "0=1" as well. Why is it that Friedman et al.'s proof that RA doesn't prove "0=1" is acceptable to you, but Gentzen's proof, Gödel's Dialectica proof, and Chapter 8.3 of Shoenfield's book on Mathematical Logic, which show that PA doesn't prove "0=1", are all bad?


Because Gentzen's proof used the fact that any decreasing sequence of ordinals terminates. Friedman et al.'s proof didn't use this fact. I think that you should watch the video; that is all I know: https://video.ias.edu/voevodsky-80th


2018-05-04 16:15 GMT+02:00 <roco...@theorem.ca>:
On Fri, 4 May 2018, José Manuel Rodriguez Caballero wrote:

      Russell said:
      Again, using what system can you prove that "0=1 cannot be deduced in R#"? You are proving it in ZFC? in Coq? in PA? in R#?
      To go from the claim that R# has the integers mod 2 as a model to the claim that 0=1 cannot be deduced requires that you perform induction over the class of all deductions in R#.


My official answer is that the proof given in the paper 

Friedman, Harvey, and Robert K. Meyer. "Whither Relevant Arithmetic?" The Journal of Symbolic Logic 57.3 (1992): 824-831.

is meta-mathematical. Your approach, and the approach of many Coq programmers, is: given a formal system, I prove theorems in this system. My approach, and the approach of most traditional
mathematicians, is: given a statement that I believe is true, I look for a formal system to formalize it in a convincing way, avoiding fallacies like begging the question. In the case of
foundations of mathematics, both approaches collide; this is the reason why foundations of mathematics is a problem and nobody is happy doing it.

But I can find references to meta-mathematical proofs that PA doesn't prove "0=1" as well. Why is it that Friedman et al.'s proof that RA doesn't prove "0=1" is acceptable to you, but Gentzen's proof, Gödel's Dialectica proof, and Chapter 8.3 of Shoenfield's book on Mathematical Logic, which show that PA doesn't prove "0=1", are all bad?


--
Russell O'Connor                                      <http://r6.ca/>
``All talk about `theft,''' the general counsel of the American Graphophone
Company wrote, ``is the merest claptrap, for there exists no property in
ideas musical, literary or artistic, except as defined by statute.''

José Manuel Rodriguez Caballero

May 4, 2018, 12:15:21 PM5/4/18
to roco...@theorem.ca, meta...@googlegroups.com
What if I told you that Friedman et al.'s proof that "RA doesn't prove
'0=1'" also doesn't satisfy Hilbert's standards?

I would be surprised. I'm not an authority on this subject. I'm only responsible for my own publications in journals, and if someone criticizes me for informal arguments I will answer that I am not doing foundations of mathematics, but just regular mathematics. If someone criticizes me for a genuine mathematical mistake in number theory or presents a counterexample, I will acknowledge my mistake and math will go on.


2018-05-04 18:03 GMT+02:00 <roco...@theorem.ca>:
On Fri, 4 May 2018, José Manuel Rodriguez Caballero wrote:

      Russell said
      What about Gödel's Dialectica proof and Shoenfield's (similar) proof?
      Also, every decreasing sequence of ordinals does terminate. Pick up any book on the theory of ordinals to find such a proof.


I didn't read those papers; my area of expertise is number theory, not foundations of maths. The proofs that every decreasing sequence of ordinals terminates, even in CIC, assume
other hypotheses which do not satisfy Hilbert's standards. The same for hydra battles.

What if I told you that Friedman et al.'s proof that "RA doesn't prove
'0=1'" also doesn't satisfy Hilbert's standards?

José Manuel Rodriguez Caballero

May 4, 2018, 3:09:37 PM5/4/18
to roco...@theorem.ca, meta...@googlegroups.com
Thank you for this observation. It would be nice to formalize the consistency of R# in Coq, but "maybe" there will be some foundational problems. In my case, I'm just focusing on CategoryTheory from UniMath in order to formalize my favorite model of arithmetic:

Moerdijk, Ieke. "A model for intuitionistic non-standard arithmetic." (1995). http://repository.ubn.ru.nl/bitstream/handle/2066/129065/129065.pdf?sequence=1

I'm trying to formalize in Coq the unprovability of Goodstein's theorem in higher-order Heyting arithmetic, but I'm following a topos theoretic approach.



2018-05-04 19:32 GMT+02:00 <roco...@theorem.ca>:
Using quantifiers in R#'s induction principle maybe isn't so much of a problem, because the proof that 0=1 holds in a finite model means that those quantifiers are interpreted over a finite domain.  At this point you can perhaps get away with ordinary induction over the class of all deductions and not need anything beyond Hilbert's standards.

Okay, now I find it plausible that Friedman et al.'s proof satisfies "Hilbert's standards".


On Fri, 4 May 2018, roco...@theorem.ca wrote:

You should study Friedman et al.'s proof carefully then, because I think you are in for a surprise.

Ken Kubota

Jun 2, 2018, 10:33:48 AM6/2/18
to meta...@googlegroups.com, HOL-info list, cl-isabe...@lists.cam.ac.uk, John Harrison, Prof. Peter B. Andrews
John Harrison has recently (in March 2018) proposed a return to set theory
in a presentation with the title "Let’s make set theory great again!"
available at
http://aitp-conference.org/2018/slides/JH.pdf
(mentioning the inflexibility of the type systems of HOL and Isabelle/HOL)
on which I would like to comment, and to use this occasion to reply to Jan Burse
concerning Q0/R0 vs. ZFC as a foundation of mathematics.


1. Type Theory vs. Set Theory (Reply to Jan Burse)

Both Russell's type theory and Zermelo's set theory were published in 1908
to solve the problem with Russell's antinomy causing inconsistency.
In order to decide which approach is appropriate, the philosophical reason
for this antinomy and the formal system's adequacy for representing
mathematics should be taken into account.

Philosophically, Russell correctly stated that self-reference is responsible
(but didn't recognize that negation is the second component for an antinomy:
the set of all sets that do NOT contain THEMSELVES).

Mathematically, Russell drew the correct conclusion and drafted a formal system
(type theory) in which the language doesn't allow one to express such constructions.
In other words, the syntactic means of the language are the key to the solution.
This means that all of mathematics can be expressed, and only those
constructions involving self-reference and negation are excluded.
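
For illustration, here is a short Coq sketch of the argument that is being blocked (the encoding below is mine and belongs to none of the systems discussed): assuming a naive "set of all sets that do not contain themselves" immediately yields a contradiction.

Section RussellSketch.
  Variable set : Type.
  Variable mem : set -> set -> Prop.   (* mem x y reads "x is an element of y" *)

  (* Naive comprehension, specialized to the Russell predicate
     (self-reference "mem x x" combined with negation): *)
  Hypothesis russell : exists r, forall x, mem x r <-> ~ mem x x.

  Theorem russell_contradiction : False.
  Proof.
    destruct russell as [r Hr].
    assert (Hn : ~ mem r r).
    { intros Hm. exact (proj1 (Hr r) Hm Hm). }
    exact (Hn (proj2 (Hr r) Hn)).
  Qed.
End RussellSketch.

In a simply typed language such as Q0, the offending predicate cannot even be written down, since a self-application like "x ∈ x" is ill-typed.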

Set theory (e.g., ZFC, the variant of Zermelo set theory in use today)
takes a very different approach by giving constructions class status by default,
and granting set status only to those constructions that are frequently used.
Compared with the type-theoretic approach, this means that the set-theoretic
solution is to simply forbid all reasoning first, and then to allow only certain
constructions (a positive list).
The result (ZFC) is not a systematic solution of the problem (Russell's antinomy),
but a preliminary solution based on the fields of mathematics explored at that time.
For example, ZFC is not sufficient to handle large cardinals (which require an
additional axiom), and it should be easy to construct other examples not covered
by ZFC, which would require introducing further axioms.
This means that the (from a logical point of view) arbitrary set of axioms in ZFC
(containing both logical and non-logical axioms) can cover only parts of mathematics,
but not (all of) mathematics in general, and, furthermore, this limitation cannot
be circumvented by adding further axioms (resulting in another arbitrary set of axioms).

A systematic solution can consist only in shaping the syntactic means of the
formal language (finding the grammar of mathematics) such that they correctly
preserve dependencies and, hence, prohibit constructions involving both
self-reference and negation.
Jean van Heijenoort has described both approaches (type theory and set theory)
properly: "The two responses are extremely different; the former is a far reaching
theory of great significance for logic and even ontology, while the latter is an
immediate answer to the pressing needs of the working mathematician.”
[Heijenoort, 1967c, p. 199] - http://www.owlofminerva.net/files/fom.pdf

Note that Q0 and R0 have the five logical axioms only
https://plato.stanford.edu/entries/type-theory-church/#ForBasEqu
http://doi.org/10.4444/100.3 (pp. 353 ff.),
and non-logical axioms such as the three axioms of group theory are not
introduced as axioms, but are expressed in the form of a definition:
http://doi.org/10.4444/100.3 (p. 362).
The introduction of any further non-logical axioms (in the form of a definition)
will not cause the inconsistency of the system, but only prove the emptiness
of the set obtained by this definition (since no mathematical entity can
satisfy contradictory properties).
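
A rough Coq analogue of this style (all names below, such as is_group and contradictory, are mine): the group laws are packaged as a definition rather than postulated, and a contradictory "axiom set" merely denotes an unsatisfiable predicate.

(* Group laws as a definition (a predicate), not as postulated axioms. *)
Definition is_group {A : Type} (op : A -> A -> A) (e : A) (inv : A -> A) : Prop :=
  (forall x y z, op x (op y z) = op (op x y) z) /\
  (forall x, op e x = x) /\
  (forall x, op (inv x) x = e).

(* A deliberately contradictory "definition": *)
Definition contradictory (n : nat) : Prop := n = 0 /\ n <> 0.

(* Nothing satisfies it, but the logic itself stays consistent. *)
Lemma contradictory_empty : forall n, ~ contradictory n.
Proof. unfold contradictory. intros n [H1 H2]. exact (H2 H1). Qed.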


2. Type Theory vs. Set Theory (Reply to John Harrison)

John Harrison correctly states that the current type systems suffer either from
inflexibilities (HOL, Isabelle/HOL) or from too complex rules (Coq) in a
presentation from March with the title "Let’s make set theory great again!"
available online at
http://aitp-conference.org/2018/slides/JH.pdf

The same critique was already expressed by Freek Wiedijk, see
section 3 of http://www.owlofminerva.net/files/fom.pdf

In a private email from January, John considered both HOL as well as
the axiomatic type classes in Isabelle as not solving the problem due to
the "lack of quantification over types in HOL".

Up to this point I fully agree, but I clearly disagree with set theory
being the solution, since it would be nothing other than a fallback
to a workaround (ZFC) based on a rather arbitrary set of axioms
involving non-logical axioms, too, and incapable of systematically
representing all of mathematics; van Heijenoort: "immediate answer
to the pressing needs of the working mathematician."

The inflexibilities mentioned above only show that the syntactic means
of the language (of current type theory, i.e., HOL and Isabelle/HOL) are
not developed enough to fully represent all of the dependencies.

But the problems mentioned by John can be easily solved with explicit
quantification over types (i.e., binding type variables with lambda),
as done in R0:

1. John: "For example, if you prove something generic about groups over a type,
you may not be able to instantiate it later to a group over a subset of a type." (p. 21)

In R0, any non-empty set (hence, any non-empty subset) is
automatically considered (and introduced as) a type (cf. p. 11 of
http://www.owlofminerva.net/files/formulae.pdf ),
and instantiating general properties of a group to a specific group is actually
the main example for demonstrating the power of quantification over types
in R0, cf. pp. 12 (explanation), 362 (definition of group). The uniqueness of
the identity element (p. 421, GrpIdElUniq) is used in combination with the fact
that <o, XOR> is a group (p. 422, theorem "Grp o XOR") to obtain
by substitution (without having to carry out the proof of uniqueness of the
identity element in a group again) the uniqueness of the
identity element of the specific group <o, XOR> (p. 423, XorGrpIdElUniq).
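
As a loose Coq analogue of this instantiation pattern (the names is_identity, id_unique, and xorb_id below are mine, not the R0 theorems cited above), the generic uniqueness lemma is proved once and then specialized to <bool, xorb> without repeating its proof:

Require Import Bool.

Definition is_identity {A : Type} (op : A -> A -> A) (e : A) : Prop :=
  (forall x, op e x = x) /\ (forall x, op x e = x).

(* Proved once, for any carrier and operation: *)
Lemma id_unique {A : Type} (op : A -> A -> A) (e e' : A) :
  is_identity op e -> is_identity op e' -> e = e'.
Proof.
  unfold is_identity. intros [He_l _] [_ He'_r].
  rewrite <- (He'_r e). apply He_l.
Qed.

Lemma xorb_id : is_identity xorb false.
Proof. unfold is_identity. split; intros x; destruct x; reflexivity. Qed.

(* The generic lemma, specialized to the truth values under xor: *)
Corollary xorb_id_unique (e' : bool) :
  is_identity xorb e' -> false = e'.
Proof. exact (id_unique xorb false e' xorb_id). Qed.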

2. John: "Thinking of F as a base type, we have jumped up a couple of levels
in the type hierarchy just to adjoin one root.
If we want to construct the algebraic closure of a field we have to do
this transfinitely ..." (p. 24)

This has already been discussed in July 2017 at
https://sourceforge.net/p/hol/mailman/message/35962259/
A transfinite type theory Q was presented by Andrews in his 1965 PhD thesis
(based on his Q0 first published in 1963).
R0 avoids the stratification (the type hierarchy) of Q by its flexible handling of types,
allowing top-down definitions (similar to ZFC) rather than just the classical bottom-up
type hierarchy, since any non-empty set is a type in R0.

The key feature of R0 is internal type referencing, which preserves the
dependencies when quantifying over types (or, more exactly: binding type
variables with lambda). A quote on this is attached below.

Let me finally note that there is a minor flaw in John's presentation.
Metamath is, like Isabelle, a logical framework, and capable of coding
type theory, too. There is an implementation of Q0 in Metamath:
https://github.com/tirix/q0.mm
(For further links, see 1.1. in http://www.owlofminerva.net/files/fom.pdf )

Therefore, pp. 6 and 7 should read
Simple type theory (HOL family, Isabelle/HOL, Metamath/Q0)
instead of
Simple type theory (HOL family, Isabelle/HOL)

and p. 7 should read
Metamath/ZFC
Isabelle/ZF (but much less popular than Isabelle/HOL)
instead of
Metamath
Isabelle/ZF (but much less popular than Isabelle/HOL)

Ken

____________________________________________________


Ken Kubota
http://doi.org/10.4444/100



## Internal type referencing
## Note that the universal quantifier has type '{{o,{o,\3{^}}},^}', or written
## without brackets, type 'o(o\3)^':
## Command: A

# wff 29 : [\t.[\p.((=_[\x.T])_p)]] := A
# w typd 29 : [\t{^}{^}.[\p{{o,t{^}}}{{o,t{^}}}.((={{{o,{o,t{^}}},{o,t{^}}}}_[\x{t{^}}{t{^}}.T{o}]{{o,t{^}}}){{o,{o,t{^}}}}_p{{o,t{^}}}{{o,t{^}}}){o}]{{o,{o,t{^}}}}]
# { {{o,{o,\3{^}}},^} }
# := A

## The specification '\3' is a reference within the type, pointing to the type
## '^' of the first argument within 'o(o\3)^', thus handling dependencies of
## types within the formula structure similar to de Bruijn indices. For
## example, the application of 'ALL' of type 'o(o\3)^' to '@' of type '^'
## would result in the wff 'ALL @' of type 'o(o@)'.
## Similarly, 'ALL NAT' would have type 'o(o NAT)'.

http://www.owlofminerva.net/files/tutorial.r0i.out.txt
June 2, 2018



> Am 04.05.2018 um 03:43 schrieb Jan Burse <burs...@gmail.com>:
>
> What is your "satisfaction" criterion exactly when you, for
> example, compare R0 and ZFC? It is a little vague to
> refer first to "reducibility" from Andrews, and then replace
> it by your own "expressiveness".
>
> Does this have something to do with recursion theory?
> Why do you think R0 and ZFC have different "expressiveness",
> in what respect exactly? Do you consider ZFC with or without
> classes? Levy, Basic Set Theory, and also Metamath,
>
> which is closely modelled after Levy, Basic Set Theory, with classes,
> both have ZFC with classes. In ZFC with classes you
> have something like beta reduction, for example:
>
> x in { y | P(y) } <=> P(x)
>
> http://us.metamath.org/mpegif/df-clab.html
>
> This is pretty much the same as beta-conversion:
>
> (lambda y.P(y))x = P(x)
>
> So there is not much difference between R0 and ZFC with classes.
> So why do you think one is superior to the other? Did you
> do systematic research based on the actual
>
> underpinnings of ZFC with classes,
> or its Metamath incarnation, respectively?

José Manuel Rodriguez Caballero

Jun 2, 2018, 11:27:48 AM6/2/18
to meta...@googlegroups.com, John Harrison, Prof. Peter B. Andrews
Dear Ken,
  In which of the systems that you mentioned is it possible, or more natural, to formalize emergent properties, e.g., the heap from the sorites paradox?

Kind Regards,
José M.



Ken Kubota

Jun 2, 2018, 1:00:58 PM6/2/18
to meta...@googlegroups.com, John Harrison, Prof. Peter B. Andrews
Dear José,

The sorites paradox (paradox of the heap) cannot be expressed in any of these systems,
I believe, since it exploits vagueness which is not embodied in the (very exact)
formal languages of the aforementioned logics.

The classical antinomy as discussed in philosophy and mathematics is based on
two components: self-reference and negation.

I'm rather sceptical of attempts to create a connection between these two phenomena
(the classical antinomy and the sorites paradox), as discussed at
https://plato.stanford.edu/archives/win2014/entries/sorites-paradox/#4

Kind regards,

Ken

____________________________________________________


Ken Kubota
http://doi.org/10.4444/100



José Manuel Rodriguez Caballero

Jun 2, 2018, 8:42:25 PM6/2/18
to meta...@googlegroups.com, John Harrison, Prof. Peter B. Andrews
Ken said

The sorites paradox (paradox of the heap) cannot be expressed in any of these systems,
I believe, since it exploits vagueness which is not embodied in the (very exact)
formal languages of the aforementioned logics.

Vagueness is an essential part of human experience. For example, "self-awareness" is a vague concept, because it is not clear how many neurons a human being needs to be "self-aware". I think that the denial of the "self" in Sam Harris' approach is a reflection of a way of thinking within a logical system which is unable to express emergent properties.

Alain Badiou uses limit ordinals in order to express his notion of event: https://www.youtube.com/watch?v=0grWeEUKYlE

It seems to me that "event" is a vague notion.

Is Badiou's informal description of his notion of event consistent with its mathematical identification with limit ordinals?

Is the mathematical identification of the event with non-standard natural numbers closer to its informal definition than its identification with limit ordinals?

Kind Regards,
José M.

Ken Kubota

Jun 5, 2018, 4:28:54 PM6/5/18
to Mario Xerxes Castelán Castro, HOL-info list, meta...@googlegroups.com, Prof. Peter B. Andrews
Dear Mario,

In order to preserve the font and layout information required to display
the formulae properly, let me copy this to the Metamath list.


On June 5, 2018, at 01:22, Mario Xerxes Castelán Castro <mario...@yandex.com> wrote:

On 02/06/18 09:33, Ken Kubota wrote:
Set theory (e.g., ZFC, the variant of Zermelo set theory in use today)
takes a very different approach by giving constructions class status by default,
and granting set status only to those constructions that are frequently used.
Compared with the type-theoretic approach, this means that the set-theoretic
solution is to simply forbid all reasoning first, and then to allow only certain
constructions (a positive list).
The result (ZFC) is not a systematic solution of the problem (Russell's antinomy),
but a preliminary solution based on the fields of mathematics explored at that time.

ZFC is the formalization of the intuitive concept of the cumulative
hierarchy of sets. It is not just a collection of ad-hoc axioms.

The word "intuitive" is the keyword here, exposing the concept (the cumulative
hierarchy of sets) as a subjective one.
But the criterion should be only the original problem, i.e., Russell's paradox
(the classical philosophical antinomy).
Only this (or maybe other forms of paradoxes) should be avoided,
but any whitelist inevitably rules out much more, resulting in an arbitrary list
of axioms (from a systematic point of view).


There is a mental trick at play with type theories. They only seem to be
more natural than ZFC because the domain of quantification is hidden by
default in many proof assistants based on type theories. If you did the
same with ZFC, it would likewise be perceived as natural. If you build a
proof assistant for set theory you _can_ introduce syntactical sugar for
restricted quantification as in type theory and automatic “typing” (sets
used as the quantification domain are inferred instead of being made
explicit).

This holds only for type theory _without_ explicit quantification over types (e.g., HOL, Q0),
but not for type theory _with_ explicit quantification over types (e.g., R0).

For example, while the principle of the excluded middle (tertium non datur)
in Q0 is expressed as Axiom 1
[gT ∧ gF] = ∀x[gx]
[gοοTο ∧ gοοFο] = ∀xο[gοοxο]
in R0 it is expressed as
[gT ∧ gF] = ∀ox[gx]
[gοοTο ∧ gοοFο] = ∀o(o\3)τoτxο[gοοxο]
– Note that the type o is specified at the term level here, and not only
at the type level (as type information) –
since the universal quantifier is defined with _type abstraction_ in R0, as
[λtτ .[λpot.(=o(ot)(ot)[λxt.To]pot)o](o(ot))]
where t ranges over the domain of types (tau: the type of types).
Binding type variables with lambda creates a link between the term level
and the type (subscript) level, which makes type (domain) information explicit.
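
A loose Coq sketch of the two points above (the names all_over and axiom1_analogue are mine; Coq's explicit Type argument is only a weak analogue of binding the type variable t with lambda in R0):

(* The domain is passed explicitly at the term level, so "forall over o" and
   "forall over nat" are two instances of one and the same constant. *)
Definition all_over (t : Type) (p : t -> Prop) : Prop := forall x : t, p x.
Check all_over bool.   (* : (bool -> Prop) -> Prop *)
Check all_over nat.    (* : (nat -> Prop) -> Prop  *)

(* An analogue of Axiom 1 over the two truth values, provable as a lemma: *)
Lemma axiom1_analogue (g : bool -> Prop) :
  (g true /\ g false) <-> (forall x : bool, g x).
Proof.
  split.
  - intros [Ht Hf] x; destruct x; assumption.
  - intros H; split; apply H.
Qed.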

A special example is the Quantified Axiom of Choice (without a free type variable)
∀tτ[AC]
or – in more explicit notation with lambda –
τ[λtτ.AC]
where the domain of the variable t is the type of types τ:


For example, ZFC is not sufficient to handle large cardinals (which require an
additional axiom), and it should be easy to construct other examples not covered
by ZFC, which would require introducing further axioms.

It has large enough cardinals for all ordinary purposes. The “large” in

Again, the term "ordinary" is the keyword here.
Any definition of it remains vague, showing that the "solution" is just
a preliminary solution and not the result of a systematic approach
(which should address nothing other than Russell's paradox,
not some concepts emphasizing the notion of set).


“large cardinal axioms” is meant to be interpreted in a relative sense.
Adding large cardinal axioms typically gives a model of ZFC, or
ZFC+large (but not as large) cardinal. You can even find a diagram of
which large cardinal axioms imply the consistency of which ZFC+not so
large cardinals. The issue thus is consistency. There is no way to
escape that situation by the well known results of Tarski on
undefinability of arithmetical truth and Gödel on consistency and
completeness.

I'm not sure what you are aiming at here, as quite different topics are
mentioned.

Concerning consistency, if one allows logical axioms only
(the five axioms of Q0 or R0), requiring all other (non-logical) "axioms"
(e.g., those of group theory) to be introduced in the form of a definition,
the logic itself never becomes inconsistent, but instead the defined
set proves to be empty (which is the desired result).

Concerning (in-)completeness, while Gödel's formal proof is correct,
the philosophical interpretation as incompleteness depends on the
semantic approach, as argued at
If one follows a purely syntactic approach (as I do), the meaning of
Gödel's theorem is of little philosophical significance, since with
a syntactic approach, completeness is given by definition (syntax = semantics).
In particular, I reject all kinds of metatheory as a foundation
(e.g., the semantic approach, where a mathematical theory is
used to define mathematical truth).

Concerning the definability of mathematical truth, my argument is that
there are two kinds of mathematical theorems: simple theorems (e.g.,
the theorems of the basic formal system such as Q0/R0), and
metatheorems resulting from studying the properties of the system,
see


We do not have, say, a set of all ordinals, or of all cardinals; that is
because it is incompatible with the separation axiom, so any set theory
which has a universal set and separation is inconsistent. Jensen’s
version of New Foundations (NFU) has a universal set and a set of all
ordinals, and moreover can be proved consistent in ZFC (even less than
that). The downside, of course, is that the separation axiom does not hold.

The question is why the combination of the universal set and
the separation axiom (such as the ZF Separation Schema, see
results in inconsistency.
The reason is that without type restrictions one can establish a set definition
combining the two properties of an antinomy (self-reference and negation).
With type restrictions, one cannot.
I would argue that R0 has both a universal set (see below) and some kind
of separation schema
but is consistent since it has the type restrictions that rule out the antinomy.


Higher-order logic has the counterpart of the axiom of separation in
set theory, but it does not have a real universal set. It only has, for
each type, the set of all elements of that type, and that is called
“universal”, but that is a bit of a misuse.

R0 does have a real universal set: V