
Automatic Theorem Prover


ce...@netvisaum.pt

Aug 13, 2002, 2:03:59 PM
Hello.
I'm starting a new project on ATP. I would like to build a new ATP,
capable of being used in any branch of math, but, to reach that objective, I
need to use some sort of language to express axioms, definitions, etc. I
think the most commonly used language is first order logic (correct me if
I'm wrong), but I don't know if this is sufficient.
So, what logic should I use that is expressive enough and permits
inference?


Hans Aberg

Aug 13, 2002, 3:38:52 PM
In article
<Pine.LNX.4.44.02081...@pu-217-129-13-4.netvisao.pt>,
ce...@netvisaum.pt wrote:

It really depends on what your purpose and intent is: If your intent is
merely to produce a programming language, then the Prolog/CLP systems in
existence use first order logic. -- When they speak about "higher
order" in this context, it really means lambda calculus and such.

But if your intent is to be able to use something useful in pure math,
then the first order logic is not of any use: You must use second order
logic in order to be able to capture actual mathematical usage.

As for the actual implementation, it really belongs in groups like
comp.lang.prolog and comp.constraints; read their FAQs. And if you want
to implement your own full compiler, also check comp.compilers and its
FAQ.

Most actual "theorem proving" systems only use first order logic, and are
therefore not usable for proving typical math theorems without tweaking
them into first order logic. See for example the Otter system:
http://www-unix.mcs.anl.gov/AR/otter/
and its manual.

If you want to implement second order logic, due to Goedel incompleteness,
it is not possible to do that directly. So the strategy then is to make a
modified Prolog engine into a meta-logical theorem verifier. This is the
approach used by the Qu-Prolog and Ergo systems. The former is the
meta-system which is used by the latter to implement actual math. See
Qu-Prolog:
http://www.svrc.it.uq.edu.au/pages/QuProlog_release.html
Earlier versions
http://svrc.it.uq.edu.au/Bibliography/svrc-tr.html?00-20
http://svrc.it.uq.edu.au/Bibliography/svrc-tr.html?00-21
Ergo
http://www.svrc.it.uq.edu.au/pages/Ergo_release.html
The built-in Ergo theories jprop and iprop are based on
Gentzen - "Investigations into Logical Deduction"
Heyting - "Die formalen Regeln der intuitionistischen Logik"
Heyting - "Intuitionism: An introduction"

If you want to save time on implementing the actual modified Prolog
system, using Qu-Prolog may help. You might then implement, say, a
non-intuitionistic system.

Some other references on theorem proving:
http://www-unix.mcs.anl.gov/AR/others.html
http://www.afm.sbu.ac.uk/
http://www.cse.psu.edu/~dale/lProlog/

Hans Aberg * Anti-spam: remove "remove." from email address.
* Email: Hans Aberg <remove...@member.ams.org>
* Home Page: <http://www.matematik.su.se/~haberg/>
* AMS member listing: <http://www.ams.org/cml/>

Laurent Oget

Aug 13, 2002, 3:46:38 PM
ce...@netvisaum.pt writes:

This is not an easy task.

I would look at http://pauillac.inria.fr/coq/ for starters.

--
Laurent Oget, Ph.D. lau...@oget.net http://oget.net
Senior Engineer Zvolve Systems Inc http://zvolve.com
Chercheur Associé Liafa http://liafa.jussieu.fr

tc...@lsa.umich.edu

Aug 13, 2002, 4:33:55 PM

The possibilities are endless. I'd recommend looking at what people have
already done in this area and planning accordingly. See for example

http://www-unix.mcs.anl.gov/qed
http://www-formal.stanford.edu/clt/ARS/ars-db.html
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences

John Harrison

Aug 14, 2002, 6:10:51 PM
ce...@netvisaum.pt wrote in message news:<Pine.LNX.4.44.02081...@pu-217-129-13-4.netvisao.pt>...

Many different choices are reasonable. First order logic, and several
varieties of higher order logic, are all successfully used in major
theorem proving systems. It would be worth looking at some existing
systems and getting an idea how they are used before making a decision.
You'd also probably find some valuable ideas for the design and
architecture of your own ATP.

A general discussion of formalized mathematics, with some discussion on
the choice of foundational system, can be found in an old paper of mine:

http://www.cl.cam.ac.uk/users/jrh/papers/form-math3.html

As well as the basic "order" of a logical system, two very significant
attributes, discussed in this paper, are

* How to deal with definitions

* How to deal with "undefined" terms like 1/0, tan(pi/2) etc.

To get a feel for the range of systems out there and how they are used,
I'd second the pages recommended by Tim Chow. The list of systems here:

http://www-formal.stanford.edu/clt/ARS/systems.html

will give you an overview of some of the major systems around, and there
is some discussion of formalized mathematics at:

http://www-unix.mcs.anl.gov/qed

I use a theorem prover for higher-order logic (HOL Light) to do formal
correctness proofs, but this isn't necessarily the system I'd recommend
for use in pure mathematics. Among others, I'd certainly look at Mizar,
which has been used for extensive formalization of mathematics:

http://web.cs.ualberta.ca/~piotr/Mizar/

Mizar uses first order logic to make deductions from the axioms for
Tarski-Grothendieck set theory. (I don't think it would make a big
difference in most work to use a more orthodox system like ZFC.) Its
formulation of first-order logic permits the schematic instantiation of
predicates, so is sometimes called "1.01 order logic", but although not
typically found in current logic texts, such setups are quite standard.
(If I remember right, the system presented in Hilbert & Ackermann and the
system for which Goedel first proved incompleteness both had such rules.)

* * * * * * *

Hans Aberg's reply contained what seem to me two seriously misleading
exaggerations:

| But if your intent is to be able to use something useful in pure math,
| then the first order logic is not of any use: You must use second order
| logic in order to be able to capture actual mathematical usage.

This is not really true. One can perfectly well work using first order
axioms for sets, as Mizar does. Then functions and other "higher-order"
objects are simply sets. Indeed, insofar as mathematicians think about
logical foundations at all, this seems to be what they usually have in
mind (see e.g. the early material in Bourbaki). And the usual deductive
systems for higher order logic can be formulated in first order terms, as
shown in detail in several standard logic textbooks. Certainly, higher
order logic has some significant advantages, and I'm often very glad to
be using it. But it's *not* obligatory to do real mathematics.

| If you want to implement second order logic, due to Goedel
| incompleteness, it is not possible to do that directly. So the
| strategy then is to make a modified Prolog engine into a meta-logical
| theorem verifier. This is the approach used by the Qu-Prolog and Ergo
| systems. The former is the meta-system which is used by the latter to
| implement actual math.

This represents one interesting but not necessarily representative strand
of research. The implication that this particular Prolog-based
metatheoretic approach is necessary in the light of incompleteness
phenomena is an absurd non-sequitur.

One may simply observe that typical higher-order deductive systems (say,
those complete for Henkin's general models), while theoretically
incomplete, are likely to be adequate for most mathematics one actually
cares about formalizing. And in other cases one might just present
theorems as implied by additional axioms, rather than getting involved in
metatheory.

John.

Hans Aberg

Aug 15, 2002, 6:44:19 AM
In article <c616467c.02081...@posting.google.com>,
jo...@ichips.intel.com (John Harrison) wrote:

>Hans Aberg's reply contained what seem to me two seriously misleading
>exaggerations:
>
>| But if your intent is to be able to use something useful in pure math,
>| then the first order logic is not of any use: You must use second order
>| logic in order to be able to capture actual mathematical usage.
>
>This is not really true. One can perfectly well work using first order
>axioms for sets, as Mizar does.

This is really a statement from the Otter manual:
Otter: http://www-unix.mcs.anl.gov/AR/otter/

Let's take an example, the induction axiom:
(1) P(0) and (P(n) => P(n')) => P(x).
(2) all x: P(0) and (P(n) => P(n')) => P(x).
(3) all P, all x: P(0) and (P(n) => P(n')) => P(x).
In all these variations, P will be a variable, no matter how you would
want to term it in logical "n-order" terms.

First order computer systems do not admit the predicate P to be variable.
So one will have to introduce a new predicate, say N(P, x) and write say
(4) N(P, 0) and (N(P, n) => N(P, n')) => N(P, x).

Even if some may think that is workable, one could not say that this is a
computer implementation adapted to the human needs, as that simply is not
the style in math.

> Then functions and other "higher-order"
>objects are simply sets.

Is this formally correct axiomatic set theory then?

-- The reason I landed on the variation where the Prolog-like system
serves as the meta-logical deduction system rather than the logical system
was that I started out with the naive variation of interpreting predicates
as "sets". Even though conceptually efficient, this does however not lead
to a formally correct axiomatic set theory, and leads to a number of
difficulties, like when trying to interpret "=" with respect to
unification.

Thus I changed to the meta-level approach, and it happens this is the way
Qu-Prolog approaches the problem. So my current feel is that this is the
correct approach.

-- When implementing these things, I first translated the Mini-Prolog that
comes with the Haskell interpreter Hugs http://haskell.org/hugs into C++,
adding a Flex/Bison based parser. Then I use that for structured
implementation experiments.

It is not particularly difficult to add a higher-order system, so I do not
see why one should not do that, as that leads to the style that one is
used to in math.

> Indeed, insofar as mathematicians think about
>logical foundations at all, this seems to be what they usually have in
>mind (see e.g. the early material in Bourbaki).

So what is your conception of how mathematicians think about the induction
axiom: Is it like (1)-(3) above or more like (4)?

> And the usual deductive
>systems for higher order logic can be formulated in first order terms, as
>shown in detail in several standard logic textbooks. Certainly, higher
>order logic has some significant advantages, and I'm often very glad to
>be using it. But it's *not* obligatory to do real mathematics.

The real problem is to get a system that does not put the burden on the
user of the program. Pure math is too complicated as it is.

Even if such reductions can be made formally, that is not the same thing
as leading to a practical system useful to working mathematicians: humans
do an efficient job pretty much by recognizing mathematical structures,
and formal restructuring of logical expressions destroys that structure.

So some things like prenex normal form and Skolemization may look like
good shortcuts for a computer implementation, but will quickly make the
system unusable for serious math.

Also, some things like replacing all x, exist y: P(x, y) with exist f, all
x: P(x, f(x)) can be done if one assumes the axiom of choice and that the
variables range over sets, but one does not always make such assumptions
even in standard math:

So if a computer system is to be usable for pure math, it must be able to
capture such subtleties. This leads to the meta-logical approach, which
can capture such things.

>| If you want to implement second order logic, due to Goedel
>| incompleteness, it is not possible to do that directly. So the
>| strategy then is to make a modified Prolog engine into a meta-logical
>| theorem verifier. This is the approach used by the Qu-Prolog and Ergo
>| systems. The former is the meta-system which is used by the latter to
>| implement actual math.
>
>This represents one interesting but not necessarily representative strand
>of research. The implication that this particular Prolog-based

Please note the difference between "Prolog based" and a "modified Prolog
engine":

By the latter term, I mean the main ingredients: a way to recognize and
match terms via unification (however that is defined), and a way to search
out the terms.

The original Prolog engine is characterized by a stack based term search;
I do not mean to require such a thing: one can easily admit hybrids with
different search techniques working side by side.

>metatheoretic approach is necessary in the light of incompleteness
>phenomena is an absurd non-sequitur.

Well, as I said above, the naive approach of interpreting the predicates
as sets does not lead to a formally correct theory.

Therefore, if the idea is to produce a math system which is guaranteed to
be logically correct, it seems best to follow the exact definitions of
formal logic.

Otherwise, the incompleteness theorem says that it is not possible to make
a fully automatic theorem proving system. From the point of view of
practical implementation, one hits this problem much earlier in the form
of the rather enormous axiom redundancy that mathematicians are used to,
which causes automated systems to end up in non-termination (infinite
loops). -- And even long, finite searches may be prohibitive in an
automated system.

Humans can easily live with this redundancy by discriminating between
useful and useless proof paths. This is essentially an AI (artificial
intelligence) problem.

>One may simply observe that typical higher-order deductive systems (say,
>those complete for Henkin's general models), while theoretically
>incomplete, are likely to be adequate for most mathematics one actually
>cares about formalizing. And in other cases one might just present
>theorems as implied by additional axioms, rather than getting involved in
>metatheory.

My own experience from experimenting with implementing these kinds of
systems is that a system that disregards meta-logical theory does not lead
to a formally correct mathematical theory.

Also, the problems with the Prolog clause
B :- A.
with respect to negation can be easily understood if it is interpreted as
a meta-logical provability
A |- B.

I am looking at Qu-Prolog now, and it seems to me that this approach is
similar to the one I am experimenting with. So my feeling right now is
that this is the right approach.

One advantage of this meta-theoretical approach is that one need not be
restricted to the formal logical framework, but can give it a more exact
implementation. For example, modus ponens can be implemented as
 A, A => B |- B.
where "=>" is now a 2-ary predicate, just as in formal logic. In
(higher-order) Prolog, this would be
B :- A, A => B.
or
B :- A, to(A, B).

When plugging this into a computer, the problem is that the variable "B"
will match anything. So the computer system will end up in
non-termination.
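
To make this concrete, here is a minimal sketch in plain Prolog of how
such a clause loops (the names prove/1 and to/2 are mine, purely for
illustration):

    prove(p).                               % fact: p holds
    prove(to(p, q)).                        % fact: p => q
    prove(B) :- prove(A), prove(to(A, B)).  % modus ponens as a clause

The query ?- prove(q). succeeds with A = p, but a query like ?- prove(r).
falls into the last clause with B = r, calls prove(A) with A unbound, and
the search recurses without ever terminating.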

Therefore, one needs some way to cut off the number of possibilities: in
math, that comes in the form of a "proof", when one says that "A is
provable by axioms ...".

So I am now experimenting with implementing such proofs. Then the original
Prolog term search engine comes in quite nicely, as it helps the writer of
the proof: One does not need to be overly specific in detailing how the
axioms should be applied, because that will be searched out by the
computer.

I believe less in other ways of automating proofs, like Prolog cuts,
because there are too many clever ways to carry out such proofs: the
automation process tends to lead to unusable over-simplifications. Clever
new proofs often consist of not applying well-known facts immediately,
but throwing in some clever manipulations first, and then applying these
facts. I do not see how an automated system can capture that kind of
creativity.

All in all, this leads to a style which I, with my experience as a
mathematician, feel is right. And in the end, I think that is what
matters: that the system is easy to use and accurate.

TheBean

Aug 15, 2002, 11:34:45 AM
jo...@ichips.intel.com (John Harrison) wrote in message

> Hans Aberg's reply contained what seem to me two seriously misleading
> exaggerations:
>
> | But if your intent is to be able to use something useful in pure math,
> | then the first order logic is not of any use: You must use second order
> | logic in order to be able to capture actual mathematical usage.
>
> This is not really true.

In fact, there are several interesting mechanical proofs
of concepts that I would consider "pure math" that use only
first order logic. In particular, Shankar's proof of Goedel's
incompleteness theorem and the Church-Rosser theorem using
the Boyer-Moore theorem prover.

http://www.csl.sri.com/users/shankar/goedel-book.html

To the original poster, I would also suggest looking at
PVS, which provides a formalism of higher order logic with
types:

http://pvs.csl.sri.com/

and at ACL2, a follow-on to the Boyer-Moore theorem prover.
ACL2 provides a first order logic in which applicative
Common Lisp is used as a specification language.

http://www.cs.utexas.edu/users/moore/acl2/

While these systems represent two very different philosophies
towards theorem proving, they have both been successfully
employed in a variety of large-scale verification efforts.

Dave

Hans Aberg

Aug 15, 2002, 4:08:43 PM
In article <383d1d49.02081...@posting.google.com>,
TheBe...@yahoo.com (TheBean) wrote:

>> Hans Aberg's reply contained what seem to me two seriously misleading
>> exaggerations:

>> | But if your intent is to be able to use something useful in pure math,
>> | then the first order logic is not of any use: You must use second order
>> | logic in order to be able to capture actual mathematical usage.

>> This is not really true.

> In fact, there are several interesting mechanical proofs
>of concepts that I would consider "pure math" that use only
>first order logic. In particular, Shankar's proof of Goedel's
>incompleteness theorem and the Church-Rosser theorem using
>the Boyer-Moore theorem prover.
>
>http://www.csl.sri.com/users/shankar/goedel-book.html

But then I figure these use the meta-logic needed to study higher-order
axioms like mathematical induction: if you include mathematical induction,
then one ends up with a higher order implementation, or possibly a very
screwy first order implementation.

Apart from specialty fields, few working pure mathematicians are willing
to give up mathematical induction, the axiom of choice, and such things.

This then falls into the picture of using the modified Prolog engine as
the basis for a first order meta-logic, which then is used to study the
higher order object logic systems (just as in say Qu-Prolog and Ergo,
then).

Let me quote from the Otter manual <http://www-unix.mcs.anl.gov/AR/otter/>
(Otter is a first order system):
>>>>
What Otter Isn't

Some of the first applications that come to mind when one hears "automated
theorem prover" are number theory, calculus, and plane geometry, because
these are some of the first areas in which students try to prove theorems.
Unfortunately, Otter cannot do much in these areas: interesting number
theory usually require induction, interesting calculus and analysis
problems usually require higher order functions, and the first order
axiomatizations of geometry are not practical. (Nonetheless, Art Quaife
has proved many interesting theorems in number theory and geometry using
Otter.)
<<<<

This captures my hunch, and what I wanted to express: even though those
first order systems may be useful in specialty situations, they are not
suited to handling general pure math, because even if it can sometimes
be done by some specialty tweaks, it is not going to be practical in
general.

None of the posts here change that general picture.

And when one admits higher order axioms to be added, as well as other
types of axioms with the redundancy common in standard pure math (say just
the standard axioms for groups, rings or integers), then the "automated"
part of theorem proving usually goes out of the window quite quickly,
because one ends up in non-termination and prohibitively long deduction times.

Therefore, I think that for general pure math use, one for now ends up with
a "proof verifier", where the user enters proofs, just like in pure math.

Such a proof verifying system is in fact no more restrictive than first
order systems, because if one indicates a set of first order axioms with
no redundancy to be used when attempting to prove a certain theorem, one
ends up with the same proof verification mechanism as in the first order
systems.

The essentially AI problem of making such a system into a good automated
theorem prover is probably quite difficult, as it then has to mimic the
creativity of human mathematicians. (Unless one believes
mathematicians are incapable of using imagination when discovering new
methods of carrying out proofs. :-) )

tc...@lsa.umich.edu

Aug 18, 2002, 9:43:57 PM
In article <remove.haberg-1...@du128-226.ppp.su-anst.tninet.se>,

Hans Aberg <remove...@matematik.su.se> wrote:
>Let's take an example, the induction axiom:
> (1) P(0) and (P(n) => P(n')) => P(x).
> (2) all x: P(0) and (P(n) => P(n')) => P(x).
> (3) all P, all x: P(0) and (P(n) => P(n')) => P(x).
>In all these variations, P will be a variable, no matter how you would
>want to term it in logical "n-order" terms.

I'm not sure I understand your example. A working mathematician is
probably only going to be interested in one specific P at a time, and
is unlikely to want to quantify over all P. True, the parser will have
to be able to recognize any instance of the induction schema, but this
doesn't mean that the syntax itself has to allow quantification over
higher-order objects.

Also, the axiom of choice, which you mentioned in another article, is
part of ZFC, which is a first-order axiomatization of set theory.

John Harrison

Aug 18, 2002, 9:43:57 PM
remove...@matematik.su.se (Hans Aberg) wrote in message news:<remove.haberg-1...@du128-226.ppp.su-anst.tninet.se>...

| >| But if your intent is to be able to use something useful in pure math,
| >| then the first order logic is not of any use: You must use second order
| >| logic in order to be able to capture actual mathematical usage.
| >
| >This is not really true. One can perfectly well work using first order
| >axioms for sets, as Mizar does.
|
| This is really a statement from the Otter manual:
| Otter: http://www-unix.mcs.anl.gov/AR/otter/

This is quite a large document, so perhaps you can point me at the exact
quote? Otter, by the way, has been used to do inference in first order
axiomatic set theory. (NBG, which has the advantage over ZFC of being
finitely axiomatized.) See for example:

http://www.math.gatech.edu/~belinfan/research/autoreas/index.html

Of course, the prospect of proving difficult theorems entirely
automatically from this basis is fanciful. But that is true of higher
order logic too with almost the same force. To prove difficult theorems
one is likely to need a mixture of interaction and automation, as
supported by a number of current systems.

| Let's take an example, the induction axiom:
| (1) P(0) and (P(n) => P(n')) => P(x).
| (2) all x: P(0) and (P(n) => P(n')) => P(x).
| (3) all P, all x: P(0) and (P(n) => P(n')) => P(x).
| In all these variations, P will be a variable, no matter how you would
| want to term it in logical "n-order" terms.
|
| First order computer systems do not admit the predicate P to be variable.
| So one will have to introduce a new predicate, say N(P, x) and write say
| (4) N(P, 0) and (N(P, n) => N(P, n')) => N(P, x).

No such ad-hoc solution is necessary. One simply regards P as a set and
writes "n \in P". (Forgive the LaTeX; "\in" is the set membership
predicate.) This is how induction is presented in many textbooks.
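
In that notation, the whole induction axiom becomes a single first order
sentence of set theory. A sketch (my rendering, not a quotation from any
particular text):

    \forall P.\ (0 \in P \land \forall n \in \mathbb{N}.\ (n \in P
        \Rightarrow n' \in P)) \Rightarrow \forall x \in \mathbb{N}.\ x \in P

Here P ranges over sets, which are first order objects of the theory, so
no second order quantification is involved.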

| Even if some may think that is workable, one could not say that this is a
| computer implementation adapted to the human needs, as that simply is not
| the style in math.

Not so if you use the set-theoretic formulation. Most contemporary
mathematicians are entirely comfortable with set-theoretic vocabulary,
and if anything need to be discouraged from its gratuitous overuse.

| > Then functions and other "higher-order"
| >objects are simply sets.
|
| Is this formally correct axiomatic set theory then?

Yes, insofar as I can attach any sense to your question. Even without
detailed knowledge of formal logic, you must be familiar with the
notional reduction of most of mathematics to set theory. And it is indeed
the case that most of this can in principle be formalized in a first
order axiom system for sets. For example, see MacLane's book
"Mathematics, Form and Function", p377:

"As to precision, we have now stated an absolute standard of
rigor: A Mathematical proof is rigorous when it is (or could be)
written out in the first-order predicate language $L(\in)$ as a
sequence of inferences from the axioms ZFC, each inference made
according to one of the stated rules. [...] When a proof is in
doubt, its repair is usually just a partial approximation to the
fully formal version."

For more on this, see the paper of mine I originally recommended:

http://www.cl.cam.ac.uk/users/jrh/papers/form-math3.html

| -- The reason I landed on the variation where the Prolog-like system
| serves as the meta-logical deduction system rather than the logical system
| was that I started out with the naive variation of interpreting predicates
| as "sets". Even though conceptually efficient, this does however not lead
| to a formally correct axiomatic set theory, and leads to a number of
| difficulties, like when trying to interpret "=" with respect to
| unification.
|
| Thus I changed to the meta-level approach, and it happens this is the way
| Qu-Prolog approaches the problem. So my current feel is that this is the
| correct approach.

Fine. My only objection was to your presenting this as "the approach"
--- the inevitable reaction of a thinking person to Goedelian
incompleteness --- when there are several other apparently quite
successful approaches to automated reasoning in mathematics. The
original poster was presumably hoping for something more objective.

| It is not particularly difficult to add a higher-order system, so I do not
| see why one should not do that, as that leads to the style that one
| is used to in math.

I fully accept that there are some good reasons for having a
higher-order system. I'm the author of such a system and use it almost
every day. I was merely pointing out that it is far from *essential*,
contrary to your original claim. Again, this seemed particularly
damaging to present, ex cathedra, to a poster looking for balanced
advice.

| Therefore, if the idea is to produce a math system which is guaranteed to
| be logically correct, it seems best to follow the exact definitions of
| formal logic.

Any dissonance between the viewpoint I have been advancing and the
"exact definitions of formal logic" exists only in your imagination.

John.

Hans Aberg

Aug 19, 2002, 10:19:45 AM
In article <x3Y79.12185$m7.1...@vixen.cso.uiuc.edu>,
jo...@ichips.intel.com (John Harrison) wrote:

>| This is really a statement from the Otter manual:
>| Otter: http://www-unix.mcs.anl.gov/AR/otter/
>
>This is quite a large document, so perhaps you can point me at the exact
>quote?

I wrote down the quote in another post that has appeared by now. It is
from section 1.1, "What Otter Isn't", so starting from the beginning one
will hit it rather soon. :-)

> Otter, by the way, has been used to do inference in first order
>axiomatic set theory.

...


>Of course, the prospect of proving difficult theorems entirely
>automatically from this basis is fanciful. But that is true of higher
>order logic too with almost the same force. To prove difficult theorems
>one is likely to need a mixture of interaction and automation, as
>supported by a number of current systems.

So this is what I say, what I do when experimenting with my own
implementations, and what the Otter quote above says: simpler systems
can be used in special cases, but for general mathematical use, doing more
advanced things, one needs a system where the user can enter interaction
as needed when the automated system fails.

From this point of view, there is no problem having a system that admits
higher order formulations, which is what mathematicians commonly use.

>| Lets take an example, the induction axiom:
>| (1) P(0) and (P(n) => P(n')) => P(x).
>| (2) all x: P(0) and (P(n) => P(n')) => P(x).
>| (3) all P, all x: P(0) and (P(n) => P(n')) => P(x).
>| In all these variations, P will be a variable, no matter how you would
>| want to term it in logical "n-order" terms.

>| First order computer systems do not admit the predicate P to be variable.
>| So one will have to introduce a new predicate, say N(P, x) and write say
>| (4) N(P, 0) and (N(P, n) => N(P, n')) => N(P, x).

>No such ad-hoc solution is necessary. One simply regards P as a set and
>writes "n \in P".

If one now wants P to be a set and not a predicate:

>This is how induction is presented in many textbooks.

I just looked into books by Mendelson, Kleene, and Shoenfield, and they all
let P be a predicate (typically a meta-logical variable, as opposed to a
variable of the logic studied). (Also see below about
logical/meta-logical/computer definitions of "order".)

>| Even if some may think that is workable, one could not say that this is a
>| computer implementation adapted to the human needs, as that simply is not
>| the style in math.

>Not so if you use the set-theoretic formulation. Most contemporary
>mathematicians are entirely comfortable with set-theoretic vocabulary,
>and if anything need to be discouraged from its gratuitous overuse.

The main point, though, is that even if one succeeds in avoiding
formulations where predicates and meta-logical functions appear as
variables, using axiomatic set theory variables instead, those
formulations are not very convenient in actual math use. -- I elaborated
somewhat on this in another post.

One accepts the underlying axiomatic set theoretic formulations as a basis
for the math, but in reality, by "abuse of notation", one uses the more
convenient higher order formulations. A convenient computer system must be
able to cope with that fact.

> (Forgive the LaTeX; "\in" is the set membership
>predicate.)

Use whatever notation you think is parsable. :-)

>| > Then functions and other "higher-order" objects are simply sets.

>| Is this formally correct axiomatic set theory then?

>Yes, insofar as I can attach any sense to your question. Even without
>detailed knowledge of formal logic, you must be familiar with the
>notional reduction of most of mathematics to set theory. And it is indeed
>the case that most of this can in principle be formalized in a first
>order axiom system for sets. For example, see MacLane's book
>"Mathematics, Form and Function", p377:

When sources like these speak about a "first order logic", that seems
to refer to the logic under study. So it means that one does not have
quantifiers over predicates or functions and does not have predicates
with predicate arguments.

However, in the meta-logic, predicates and functions are still allowed to
vary. Therefore the total logic/meta-logical system becomes "higher
order".

This is what matters from the point of view of computer implementation, as
one then has to implement variable matching of such predicate and function
meta-variables.

In first order Prolog-type systems, that is not admitted. So one ends up
with formulations like (4) above.

>| Thus I changed to the meta-level approach, and it happens this is the way
>| Qu-Prolog approaches the problem. So my current feel is that this is the
>| correct approach.

>Fine. My only objection was to your presenting this as "the approach"
>--- the inevitable reaction of a thinking person to Goedelian
>incompleteness --- when there are several other apparently quite
>successful approaches to automated reasoning in mathematics. The
>original poster was presumably hoping for something more objective.

I think you are jumping to conclusions, because I said in my original post
that the implementation chosen will depend on one's objectives:

A system that intends to implement a computer programming language will be
different from one that intends to automate certain types of proofs, and
those will be different from a system that is able to help
mathematicians verify their proofs in any general sense. (Even though
one can think of an integrated system that admits all components, this
sounds a bit dangerous if one does not get to know what has actually
been proved and what has merely been computed without proof
verification.)

The Goedel incompleteness means this: when speaking about an "automated
theorem prover", newbies will jump to the conclusion that a system is
possible where one just enters the theorems, and the system always produces
the proof. By the Goedel incompleteness, we know that this is impossible if
we include the number theoretic axioms.

In reality, the automatic system will hit the problem at a much more basic
level, in the form of axiom redundancy that causes non-terminations
(infinite loops):

It is not decidable (by an algorithm) whether code produces
non-termination, so the limitations of a system in always finding a proof
will probably show up on a more basic level. (That is, the Prolog
machinery executes a proof search, and if that language machinery is
Turing equivalent, it is undecidable whether the search will terminate.)

>| It is not particularly difficult to add a higher-order system, so I do not
>| see why one should not do that, as that leads to the style that one
>| is used to in math.

>I fully accept that there are some good reasons for having a
>higher-order system. I'm the author of such a system and use it almost
>every day. I was merely pointing out that it is far from *essential*,
>contrary to your original claim. Again, this seemed particularly
>damaging to present, ex cathedra, to a poster looking for balanced
>advice.

As mentioned above, I think you jumped to conclusions: higher order
formulations are so common in math that systems which enforce first order
in both logic and meta-logic will not be useful for general use. More
special systems, of course, have more special uses. -- You can prove some
theorems using a basic arithmetic desk calculator, if you so like. :-)

As for the advice to the original poster, I never said what I thought was
the universally right thing, as different objectives require different
implementations. In fact, in a private email, I suggested that he start off
with both an advanced (= logically universal) system like Qu-Prolog and a
simple system like Otter, and then decide what to choose.

>| Therefore, if the idea is to produce a math system which is guaranteed to
>| be logically correct, it seems best to follow the exact definitions of
>| formal logic.
>
>Any dissonance between the viewpoint I have been advancing and the
>"exact definitions of formal logic" exists only in your imagination.

Good. I think that one can go very far with those systems that rely on
"naive set theory" and such formulations (as math itself once did):

I started with one system which merely lets "x in P" be a
reformulation ("syntactic sugar") of "P(x)", and where the Prolog "A :- B"
is written as "B => A". This is very effective, because one can directly
see how to implement mathematical axioms.
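
In Prolog terms, that sugar was essentially a one-clause rewrite. A rough
sketch (the predicate name in/2 is just illustrative):

    % "x in P" as syntactic sugar for "P(x)": build the goal P(x)
    % from the predicate name and call it.
    in(X, P) :- Goal =.. [P, X], call(Goal).

So given a fact like prime(5), the query ?- in(5, prime). succeeds.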

But then this led to several complications, and therefore I am now
experimenting with the variation where the Prolog type inference system
stands for the meta-logic instead. I feel now this is absolutely right.
And as it roughly parallels the historical development of mathematical
meta-logic, I feel sure this is the end of it.

But who knows, perhaps there is something even more advanced? :-)

So summing it up, one can play around with various types of
implementations, depending on one's objectives.

But for a system that should support absolutely correct mathematical
theorem proof verification, I feel sure that this meta-logical approach is
the correct one. If the system should be of general type, then I think it
must support higher order formulations (predicates and functions as
variables) of the total logical/meta-logical system, no matter how the
system then treats it (say via some syntactic sugar and axiomatic set
theory reducing it to first order non-meta-logic).

Hans Aberg

Aug 19, 2002, 10:19:45 AM
In article <x3Y79.12184$m7.1...@vixen.cso.uiuc.edu>, tc...@lsa.umich.edu
wrote:

>In article <remove.haberg-1...@du128-226.ppp.su-anst.tninet.se>,
>Hans Aberg <remove...@matematik.su.se> wrote:
>>Let's take an example, the induction axiom:
>> (1) P(0) and (P(n) => P(n')) => P(x).

>> (2) P(0) and (all n: P(n) => P(n')) => all x: P(x).


>> (3) all P, all x: P(0) and (P(n) => P(n')) => P(x).
>>In all these variations, P will be a variable, no matter how you would
>>want to term it in logical "n-order" terms.

>I'm not sure I understand your example.

...


>Also, the axiom of choice, which you mentioned in another article, is
>part of ZFC, which is a first-order axiomatization of set theory.

But the axiom of choice is equivalent (also see below) to the
well-ordering principle when restricted to N.

I think that (1) and (2) above might be called "first order" in
meta-mathematical logic because they do not involve quantification over P
or any function and do not have predicates with predicates as arguments.
Whereas (1) and (2) above are logically "first order", the total
logical/meta-logical system one reasons with becomes "higher order" as it
involves a variable P.

But in a computer, one still must implement these axioms as "higher
order", meaning that P becomes a variable to be matched. Typical Prolog
type "first order" programs would not admit such a matching.

It is not difficult to implement (1) to admit matching against the
variable P, which I know because I have done it. -- I experiment with the
Mini-Prolog that comes with Hugs http://haskell.org/hugs, which I
translated into C++, adding a Flex/Bison based parser. Over the weekend, I
started adding quantifiers according to the description in Mendelson,
"Introduction to Mathematical Logic". But one could probably use the
Qu-Prolog system
http://www.svrc.it.uq.edu.au/pages/QuProlog_release.html, if one is not
interested in writing a full system.

So if I can do it with my own private hacking, it ought to be possible
for the computer science pros who do that stuff all day.

> A working mathematician is
>probably only going to be interested in one specific P at a time, and
>is unlikely to want to quantify over all P. True, the parser will have
>to be able to recognize any instance of the induction schema, but this
>doesn't mean that the syntax itself has to allow quantification over
>higher-order objects.

I am worried about your term "has to allow": clearly, one can prove
mathematical theorems by writing directly in computer machine code,
which is what one did in the early days of computing, before computer
languages were invented.

The reason one uses computer languages is that it is convenient for the
human user.

Any sufficiently general computer language is going to be Turing
equivalent, and from that point of view, one can do anything that can be
done in any other Turing equivalent language. However, computers,
not having infinite memory, are not Turing machines, and the Turing
equivalence reasoning does not preserve structures with respect to
computer time and space.

So in reality, when designing a computer language, one will have to deal
with the realities of what is efficient, both with respect to code running
in the computer, the time and cost of its implementation, and its
usefulness to the users.

Now, if you are saying that the higher order formulations (such as the
induction axiom) that mathematicians use all the time cannot be used, that
is not going to be very convenient for the user:

Let's take a simple example: let O_x be the set of germs of the sheaf of a
differential manifold (just the differentiable functions defined locally at
x modulo the equivalence of being identical in some neighborhood around
x). Then define the maximal ideal m_x as the set of all f satisfying
 (a) all f in O_x: f(x) = 0.
This is a higher order formulation, because there is a quantification over
the function f.

But when reducing to formal axiomatic set theory, (a) is not a higher
order formulation: let S(y) be the predicate that y is a set, then
rewrite (a) as
 (b) all f: S(f) & f in O_x => f(x) = 0.
Now, f(x) = 0 makes it look as though f is a function in the meta-logical
sense, but f is a set, so this is in reality short for
 (c) (x, 0) in f.

Now, one only has to put all the pieces together via axiomatic set
theory, in order to arrive at a truly first order formulation.

I feel sure that mathematicians, on seeing this, would acknowledge that
this is how axiomatic set theory works: it hides away the fact that the
seemingly higher order formulations are in reality just first order set
theoretic formulations.

But as for a computer system, mathematicians would not want an
implementation forcing them to do all that rewriting all the time, but
instead a system that, like Bourbaki, by "abuse of notation" admits
writing down what is intuitively correct. In the example above, the
intuitively correct thing is to take the functions f such that f(x) = 0,
not the underlying axiomatic set theory formulation.

So one ends up with a higher order implementation for general, efficient
mathematical use, no matter how one wants to reduce it to an underlying
first order machinery via some syntactic sugar.

John Harrison

Aug 21, 2002, 9:57:21 AM
I think we are in danger of repeating ourselves. Let me make one final
attempt to state clearly what seems to me a rather straightforward point.
Your original statement was:

| But if your intent is to be able to use something useful in pure math,
| then the first order logic is not of any use: You must use second order
| logic in order to be able to capture actual mathematical usage.

The facts are as follows.

1. There is a well-established reduction of most mathematics to
first order axiomatic set theory, and this in fact is the most
widely accepted "foundation" for mathematics.

2. The possibility of performing such a reduction, at least in
principle, is often cited as the ultimate test for mathematical
rigour, for example by such authorities as Bourbaki.

3. Higher order logic itself can be given a semantics in first
order set theory, and very straightforwardly. If you wish, I can
supply some references.

4. There are several computer theorem proving systems that
actually realize a reduction of mathematics to first order set
theory, notably Mizar which has been relatively successful.

Your original statement is therefore false.

I don't dispute for a moment that there are serious issues of
practicality, convenience etc. in actually formalizing mathematics in
first order set theory, still more in automating it completely. But these
difficulties do not miraculously disappear when using second order logic,
so have little relevance to your original claim.

Since I gather from your other posts that you are a competent
mathematician, I find it difficult to understand your problem with this
line of argument. Perhaps, despite your professed enthusiasm for the
"exact definitions of formal logic", you are just unfamiliar with the
usual meanings of "first order logic" and "second order logic". For
example, in another post you write:

| But in a computer, one still must implement these axioms as "higher
| order", meaning that P becomes a variable to be matched. Typical
| Prolog type "first order" programs would not admit such a matching.

Yet whether something is a "variable to be matched" in a computer
implementation is quite orthogonal to whether it is
higher-order in the logical sense. Elsewhere in that post you say:

| Let's take a simple example: let O_x be the set of germs of the sheaf of a
| differential manifold (just the differentiable functions defined locally at
| x modulo the equivalence of being identical in some neighborhood around
| x). Then define the maximal ideal m_x as the set of all f
| satisfying
| (a) all f in O_x: f(x) = 0.
| This is a higher order formulation, because there is a quantification over
| the function f.

But in the set-theoretic formulation, a function is just a particular kind
of set (a special kind of binary relation or whatever), and so logically,
in the axiomatic theory, it is just another first order object. I thought
by now this point had been made ad nauseam.

Let me recommend Shapiro's book "Foundations without Foundationalism" as a
readable and mostly elementary introduction to the use of higher-order
logic in mathematics. You may like the fact that it is to some extent a
polemic in favour of higher order logic --- I did when I first read it.
But Shapiro gives a careful discussion of technical subtleties, and is
certainly too sensitive to make such a naively categorical claim as the
one above, which I have already devoted too much time to refuting.

John.

Hans Aberg

Aug 22, 2002, 11:27:21 AM
In article <5%M89.12781$m7.1...@vixen.cso.uiuc.edu>,
jo...@ichips.intel.com (John Harrison) wrote:

>I think we are in danger of repeating ourselves.

I think so; but it may be important to clarify these things. :-)

>| But if your intent is to be able to use something useful in pure math,
>| then the first order logic is not of any use: You must use second order
>| logic in order to be able to capture actual mathematical usage.
>
>The facts are as follows.
>
> 1. There is a well-established reduction of most mathematics to
> first order axiomatic set theory, and this in fact is the most
> widely accepted "foundation" for mathematics.

When I made that first statement, I did not think of the reduction in
terms of axiomatic set theory to a first order theory (a slip of the mind;
sorry).

However, in math, it is common to distinguish between math that is
constructed within set theory and math that is not, for example when
defining categories:

When defining categories, one first defines purely logical
"metacategories". Then, if such a one is constructible within axiomatic set
theory, it is called a "category" or sometimes a "small category". See for
example MacLane, "Categories for the Working Mathematician".

Thus it is false that all math is constructible within axiomatic set
theory, even though one probably lands on that pretty fast in actual
working math.

> 2. The possibility of performing such a reduction, at least in
> principle, is often cited as the ultimate test for mathematical
> rigour, for example by such authorities as Bourbaki.

So this seems to be a strange statement. One needs to ensure consistency
of the axiomatic system used, but that is all. If there is such a
consistent axiomatic system one can reduce to, that clearly simplifies the
proof-checking, which is why it is used.

> 3. Higher order logic itself can be given a semantics in first
> order set theory, and very straightforwardly. If you wish, I can
> supply some references.

This might be interesting. My books have references to work of Church,
Goedel, Tarski, and Scholz-Hasenjaeger.

> 4. There are several computer theorem proving systems that
> actually realize a reduction of mathematics to first order set
> theory, notably Mizar which has been relatively successful.
>
>Your original statement is therefore false.

So how do they then handle the reduction of metagraphs and
metacategories and other mathematical theories not based on axiomatic set
theory?

>I don't dispute for a moment that there are serious issues of
>practicality, convenience etc. in actually formalizing mathematics in
>first order set theory, still more in automating it completely. But these
>difficulties do not miraculously disappear when using second order logic,
>so have little relevance to your original claim.

I merely note that it does not seem difficult to implement a higher
order system in a way that does not interfere, even if one is now
interested in reducing all math to first order math.

The higher order constructs come in on the metalevel, for example for the
axiom `modus ponens':
variable P, Q. P, P => Q |- Q.
If one admits that, one will of course end up with a great deal of
redundancy, which requires more control.
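
As a rough sketch of what I mean, in plain Prolog (all names are mine and
purely illustrative), such metalevel rules can be stored as data instead
of being run directly as clauses:

    % rule(Name, Premises, Conclusion): an inference rule as a term.
    rule(mp, [P, implies(P, Q)], Q).

    % A proof step applies a named rule to already-derived formulas:
    step(Derived, Name, Conclusion) :-
        rule(Name, Premises, Conclusion),
        all_derived(Premises, Derived).
    all_derived([], _).
    all_derived([F|Fs], Derived) :- member(F, Derived), all_derived(Fs, Derived).

The search is then over proof steps rather than directly over the object
formulas, which is the point of the meta-level approach.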

Otherwise, the main thing to a pure mathematician, I figure, is that the
axioms, theorems and proofs can be plugged pretty much as-is into the
computer without extensive rewriting, and that the computer verifies that
it is logically correct.

>Since I gather from your other posts that you are a competent
>mathematician, I find it difficult to understand your problem with this
>line of argument. Perhaps, despite your professed enthusiasm for the
>"exact definitions of formal logic", you are just unfamiliar with the
>usual meanings of "first order logic" and "second order logic".

Yes, I have no idea what you mean by this, despite my having repeatedly
asked you to define it. When I looked into some computer science
papers, they very explicitly noted that they have definitions that are
different from those of formal logic. See for example
http://www.cse.psu.edu/~dale/papers/Handbook_Logic_AI_LP.pdf
http://www.cse.psu.edu/~dale/papers/amast02.pdf

I also gave an example:

Induction axiom (Mendelson, p.103): For any well-formed formula P(x) in
the first order theory S:
(1) P(0) and (all x: P(x) => P(x')) => (all x: P(x)).

So here it is clear that S is a first order axiomatic theory.

But when I implement it on a computer, I need to treat P as a variable,
and that is not supported in first order Prolog systems: if I make that
extension, then the Prolog system is called higher order. See for example
Sterling & Shapiro, "The Art of Prolog", ch. 16, which includes info both
on how to reduce to first order Prolog systems and on higher
order implementations.

Back to math then: P is a metalogical variable that ranges over
well-formed formulas, and in analogy with the computer example, it seems
logical to call this a higher order (meta-)system.

>For example, in another post you write:

>| But in a computer, one still must implement these axioms as "higher
>| order" meaning P that becomes a variable to be matched. Typical
>| Prolog type "first order" programs would not admit such a matching.
>
>Yet whether something is a "variable to be matched" in a computer
>implementation is quite orthogonal to whether it is
>higher-order in the logical sense. Elsewhere in that post you say:

So to me it looks as though there is a 1-1 correspondence: when I made
that implementation, I merely extended unification so that instead of only
being able to unify expressions P(x_1, ..., x_n) where P is a constant, it
now treats P as a variable and does unification on that variable too.
(It becomes more complex for bound variables.)
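
In standard Prolog, one can already get at the head symbol with "univ"
(=..), which gives a first approximation of that extension. A sketch only;
the bound-variable cases need real work:

    % Decompose a goal into its head symbol and argument list:
    match_goal(Goal, P, Args) :- nonvar(Goal), Goal =.. [P|Args].
    % Rebuild a goal from an instantiated head symbol and arguments:
    build_goal(P, Args, Goal) :- atom(P), Goal =.. [P|Args].

For example, ?- match_goal(p(0), P, Args). gives P = p and Args = [0],
after which P can be unified like any other variable.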

>| x). Then define the maximal ideal m_x as the set of all f
>| satisfying
>| (a) all f in O_x: f(x) = 0.
>| This is a higher order formulation, because there is a quantification over
>| the function f.
>
>But in the set-theoretic formulation, a function is just a particular kind
>of set (a special kind of binary relation or whatever), and so logically,
>in the axiomatic theory, it is just another first order object. I thought
>by now this point had been made ad nauseam.

As it stands, (a) is a second order _formulation_, as it contains a
function used as a variable. Via axiomatic set theory, it is possible to
reduce it to a first order theory in the sense of formal logic, but that
is not what mathematicians would normally use, as the higher order
formulation communicates the right intent.

If you occasionally use the axiomatic set theory first order formulation
in cases where the true logic needs to be brought out, then mathematicians
would not reflect much over that either, since one knows that this is how
the logical foundations of modern mathematics are built up.

But if you insist on using only the axiomatic set theory formulations
massively all the time, like prohibiting writing f(x), then the math will
quickly become unreadable, as it no longer communicates the intended
structures. This is no stranger than the fact that it is hard for humans
to understand a computer program written directly in machine code.

Therefore, if you want to have a system that is usable by
mathematicians, it should admit higher order formulations like (a), even
if the computer then internally reduces them to a first order formal
logic theory.

So I figure that the computer program that checks the math must be
declarative, so that f(x) can be used side by side both in the cases where
it has a set theoretic definition (f: A -> B is a subset of A x B with
some special properties) and where it is used as a function in the sense
of formal logic.

>Let me recommend Shapiro's book "Foundations without Foundationalism" as a
>readable and mostly elementary introduction to the use of higher-order
>logic in mathematics.

Thank you for the references you give, because even though I may not
comment on them, I save them for future use.

> You may like the fact that it is to some extent a
>polemic in favour of higher order logic --- I did when I first read it.

>But Shapiro gives a careful discussion of technical subtleties.

As for my own implementations, it suffices for me to follow the books of
Mendelson and Kleene, which describe first order logical theories, but
which are implemented via a so-called second order Prolog machinery.

The line I am following right now is to implement this logic much more
closely with respect to types (variables, predicates, logical operations,
functions, terms, etc.) than in a typical, essentially untyped, Prolog
system.

Statements like `reductio ad absurdum':
variables P, Q. (not P |- Q, not P |- not Q) |- P.
do not seem to have any immediate counterpart in traditional Prolog
systems. So, in order to handle that, I am thinking of admitting the
clauses to have subclauses, roughly as sketched below.
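
Roughly, and with all names illustrative, a rule whose premises are
themselves sequents could be stored as a term:

    % seq(Hypotheses, Conclusion) stands for "Hypotheses |- Conclusion".
    % Each premise of the rule is itself a sequent -- a subclause:
    rule(raa, [seq([not(P)], Q), seq([not(P)], not(Q))], seq([], P)).

The two premise sequents are the subclauses; a flat Prolog clause has no
natural place to put them.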

So silly little details like that keep me busy for now. (And I am just
scratching the surface to help keep me informed, with no intent of
building a full system.)

>and is
>certainly too sensitive to make such a naively categorical claim as the
>one above, which I have already devoted too much time to refuting.

If you should actually achieve that refutation, I think you will have to
work a bit harder. :-)

Please also remember that no one has yet been able to write down axioms
for personal preferences. So even if you can demonstrate to mathematicians
that all their mathematics can be formally reduced to a first order formal
logic system, they still may, and probably will, prefer to use higher
order formulations (as in the example (a) above).

Jim Nastos

Aug 22, 2002, 11:27:26 AM
On Wed, 21 Aug 2002, John Harrison wrote:

> 3. Higher order logic itself can be given a semantics in first
> order set theory, and very straightforwardly. If you wish, I can
> supply some references.

 What exactly do you mean here? Yes, please provide some of these
references you mention. It is well known that a simple property of graphs
(namely, the property that vertex x and vertex y are connected in a
general graph) is NOT expressible in the standard first order graph
theoretic language, whereas it is expressible in (existential) 2nd order
logic.

And for the record...

> 4. There are several computer theorem proving systems that
> actually realize a reduction of mathematics to first order set
> theory, notably Mizar which has been relatively successful.
>
> Your original statement is therefore false.

No, Mizar is a proof *checker.* Not a proof finder.

--
Jim Nastos, B.Math, B.Ed | Office: 117 Athabasca Hall
MSc Candidate | Office Phone: (780) 492-5046
University Of Alberta | Edmonton, Alberta
Department of Computing Science | T6G 2H1
nas...@cs.ualberta.ca |
http://www.cs.ualberta.ca/people/grad/nastos.html

tc...@lsa.umich.edu

unread,
Aug 22, 2002, 6:24:21 PM8/22/02
to
In article <yp799.13104$m7.1...@vixen.cso.uiuc.edu>,

Jim Nastos <nas...@cs.ualberta.ca> wrote:
>On Wed, 21 Aug 2002, John Harrison wrote:
>
>> 3. Higher order logic itself can be given a semantics in first
>> order set theory, and very straightforwardly. If you wish, I can
>> supply some references.
>
> What exactly do you mean here? Yes, provide some of these references you
>mention. It is well known that a simple property of graphs (namely, the
>property that vertex x and vertex y are connected in a general graph) is
>NOT expressible in the standard first order graph theoretic language,
>whereas it is expressible in (existential) 2nd order logic.

First-order graph theory isn't the same as first-order set theory.
I don't think John Harrison is saying much more than the simple fact
that essentially all mathematics can be "reduced" to first-order
axiomatic set theory, e.g., ZFC. A graph can be thought of as an
ordered pair (V, E), where V is the vertex set and E is the edge set.
Ordered pairs can be thought of as sets: (x,y) = {{x},{x,y}}. For
x and y to be connected there just has to exist a sequence of edges
connecting them, and a sequence can also be thought of as a set (by
iterating the ordered pair construction), etc. Basically, when you
allow yourself to quantify over sets, you are allowing yourself to
quantify over practically anything.
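
For instance -- my own sketch of the unwinding, not a quotation from any
text -- "x and y are connected in G" becomes something like

    Conn(G,x,y) := exists n exists s ( Func(s) & Dom(s) = {0,...,n}
                       & s(0) = x & s(n) = y
                       & forall i ( i < n -> {s(i),s(i+1)} in E(G) ) )

where Func, Dom, <, and the natural numbers are themselves abbreviations
for first-order set-theoretic formulas. Every quantifier ranges over sets,
so the whole thing is a first-order formula of set theory, even though it
"quantifies over sequences".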

Tom Leinster

unread,
Aug 22, 2002, 8:45:48 PM8/22/02
to
tc...@lsa.umich.edu writes:

> First-order graph theory isn't the same as first-order set theory.
> I don't think John Harrison is saying much more than the simple fact
> that essentially all mathematics can be "reduced" to first-order
> axiomatic set theory, e.g., ZFC. A graph can be thought of as an
> ordered pair (V, E), where V is the vertex set and E is the edge set.
> Ordered pairs can be thought of as sets: (x,y) = {{x},{x,y}}. For
> x and y to be connected there just has to exist a sequence of edges
> connecting them, and a sequence can also be thought of as a set (by
> iterating the ordered pair construction), etc. Basically, when you
> allow yourself to quantify over sets, you are allowing yourself to
> quantify over practically anything.

Agreed, mostly: that was also my impression of what John Harrison was
saying and I don't have any argument with it.

But - and this may be the cause of the dispute - there's a big difference
between "reducing" mathematics to first-order axiomatic set theory and
*expressing* mathematics as set theory. It seems to me that the language
actually used by mathematicians is a long way from being the language of
set theory, and I don't just mean the level of formality. To take an
extreme example: if you "reduce" to ZFC then the statement

- Every non-constant complex polynomial has at least one complex root

makes no more or less sense than the statement

- Every non-constant complex polynomial is the intersection of the 4th
symmetric group S_4 with some diagonal matrix.

Of course the second statement is absurd to human beings. We judge it
absurd because implicitly we all assign types to the entities we use: if G
is of the type "group" and M of the type "matrix" then you never consider
the intersection of G and M, as the types are all wrong. So if I wanted to
write a program that, say, generated theorems and their proofs in the realm
of algebra, then I'd make sure that it couldn't even express the second
statement above; I'd choose some well-typed language more like our human
mathematical language to eliminate the possibility of such junk.
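
A minimal sketch of the kind of typing discipline I mean (the names and
predicates are mine, purely illustrative), in Prolog style:

    % declared types of some entities
    type(s4, group).
    type(d,  matrix).
    type(r,  set).
    type(c,  set).

    % intersection is well-typed only when both arguments are sets
    well_typed(intersection(X, Y)) :- type(X, set), type(Y, set).

    % ?- well_typed(intersection(r, c)).    succeeds
    % ?- well_typed(intersection(s4, d)).   fails: the types are wrong

A generator that filters candidate statements through such a check can
never even form the junk statement above.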

So some parts of the reduction of mathematics to set theory are regarded by
all as pure artifice. Some are more ambiguous. Take equivalence
relations. In the usual coding of mathematics as set theory, an
equivalence relation on a set S is regarded as a subset of S x S.
Sometimes this has real content: for instance, if ~ is an equivalence
relation on a topological space S then S/~ is Hausdorff if and only if ~ is
closed when regarded as a subset of S x S. On the other hand, if someone
started talking to you about "the cardinality of an equivalence relation"
then you'd probably want to ask them what they meant, which indicates that
the associated subset isn't quite regarded as what the equivalence relation
fundamentally *is*.

So my reservation about set theory as an *expressive* tool is not so much
that it can't express enough, but that it expresses way too much. People
type their variables; set theory doesn't. To misappropriate Weyl's words
slightly, it's "too full of sand".

Of course, none of what I've said affects the argument about the
*reduction* of mathematics...

Tom

Hans Aberg

unread,
Aug 23, 2002, 7:02:11 AM8/23/02
to
In article <ak3oal$nq0$1...@galois.mit.edu>, tc...@lsa.umich.edu wrote:

>I don't think John Harrison is saying much more than the simple fact
>that essentially all mathematics can be "reduced" to first-order
>axiomatic set theory, e.g., ZFC.

Not "essentially all math", but most:

For example, when speaking about categories, MacLane assumes the whole
category to be a set. I think others may call that a small category, only
requiring the Hom(A, B) sets to be sets. MacLane suggests that one chooses
a universe U which is a sufficiently large set into which all other sets
can be fitted. However, the theory will then depend on this universe:

In axiomatic set theory, the property of being a set is itself not a set,
but a predicate; see for example, Mendelson, p. 160, where X being a set
M(X) is the property
M(X) := exists Y: X in Y.

Further down, Mendelson defines a pair (X, Y) := {{X}, {X, Y}} when both
X, Y are sets; when they are not both sets, (X, Y) := empty set. Thus it
collapses, if one happens to plug in non-sets.

The problem when trying to plug this stuff into a computer is that one can
say to humans that "this is in effect realizable by set theory; thus we do
not have to bother about the axiomatic set theoretic details", because
axiomatic set theory is constructed in order to support that intuition.

But one cannot say that to a computer: It has to be exactly right. Also,
the intuitive reasoning above is often the cause of proof errors. So now
that one has computers that can check it exactly, one should try to do so,
in order to help with the part that humans usually are not so good at.

>Basically, when you
>allow yourself to quantify over sets, you are allowing yourself to
>quantify over practically anything.

So this is very far from being true: For example, Mendelson is very
careful in telling when one is quantifying over sets (lowercase letters)
and when the variables do not have such restrictions (uppercase letters).

In working math, one quite often allows oneself to be sloppy about that
distinction, because one counts on the essential math being realizable
within axiomatic set theory.

But again, one cannot allow such sloppiness when plugging the stuff into a
computer: It cannot use intuitive reasoning to judge whether a proof is
correct.

Hans Aberg

unread,
Aug 23, 2002, 7:02:20 AM8/23/02
to
In article <m3r8gqq...@leinster.joh.cam.ac.uk>, Tom Leinster
<lein...@dpmms.cam.ac.REMOVETHISBIT.uk> wrote:

> - Every non-constant complex polynomial is the intersection of the 4th
> symmetric group S_4 with some diagonal matrix.

>Of course the second statement is absurd to human beings. We judge it
>absurd because implicitly we all assign types to the entities we use: if G
>is of the type "group" and M of the type "matrix" then you never consider
>the intersection of G and M, as the types are all wrong. So if I wanted to
>write a program that, say, generated theorems and their proofs in the realm
>of algebra, then I'd make sure that it couldn't even express the second
>statement above; I'd choose some well-typed language more like our human
>mathematical language to eliminate the possibility of such junk.

This touches on a point that I also wanted to make:

Even though one would agree that, say, the set of natural numbers can
be constructed within axiomatic set theory as 0 := empty set, 1 := {0},
..., n+1 := {0, ..., n}, there is no agreement that N should be
constructed in this way: If one works with N in number theory, then one
will work with properties that do not depend on such a construction,
except in the sense that it shows the existence of N.
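
For concreteness, here is how small the coding is -- a sketch of my own,
with finite sets crudely represented as Prolog lists:

    vn(0, []).                                   % 0 := {}
    vn(s(N), S) :- vn(N, T), append(T, [T], S).  % n+1 := n union {n}

    % ?- vn(s(s(0)), S).   gives S = [[], [[]]], i.e. 2 = {0, 1}

Nothing in number theory depends on this particular coding, only on the
existence of some structure with the right properties.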

So again, the theory one is actually using is not the axiomatic set
theory where the only atomic object is the empty set, but one augmented
with a series of atomic objects.

When plugging this stuff into a computer, one must decide whether to
reduce everything to axiomatic set theory, or to support the way
mathematicians actually work:

In computer languages, the way that mathematicians actually work shows
up in the form of type theory, and it is considered very important
because it helps humans to write correct code.

So a math system, I figure, needs to have the same capabilities in
order to be practically usable.

Vidhyanath Rao

unread,
Aug 24, 2002, 9:40:19 PM8/24/02
to

"Hans Aberg" <remove...@matematik.su.se> wrote in message
news:remove.haberg-2...@du130-226.ppp.su-anst.tninet.se...

> In article <ak3oal$nq0$1...@galois.mit.edu>, tc...@lsa.umich.edu wrote:
>
> >I don't think John Harrison is saying much more than the simple fact
> >that essentially all mathematics can be "reduced" to first-order
> >axiomatic set theory, e.g., ZFC.
>
> Not "essentially all math", but most:

I would contest the "most": The areas that use the language/techniques
of applied homological/homotopical algebra account for a noticeable
fraction of current activity.


> In axiomatic set theory, the property of being a set is itself not
> a set, but a predicate; see for example, Mendelson, p. 160, where X
> being a set
> M(X) is the property
> M(X) := exists Y: X in Y.
> Further down, Mendelson defines a pair (X, Y) := {{X}, {X, Y}}
> when both X, Y are sets; when they are not both sets,
>(X, Y) := empty set.

This is only for vNGB or (Quine)-Morse-Kelly versions. In Ackermann's
set theory for example, M is a primitive and we have ZF for classes
(at the cost of weakening, compared to Morse-Kelly, the replacement
axiom). In particular, if X and Y are classes, (X, Y) is also a class
and not empty.

It should be noted that quite a few papers, in algebraic geometry
especially, assume that we can do such things with classes. This is the
whole point of universes as mentioned by MacLane. Now, assuming
arbitrarily large Grothendieck universes is essentially the same as
Ackermann's set theory + replacement axiom of vNGB/MK. If we use the
latter, the dependence on the universe is seen to be only apparent.

No matter what we choose, applied homological/homotopical algebra
requires a theory of sets and classes. And it requires that we allow
quantification over classes (which makes vNGB awkward to use). It is
likely that all known results can be proved in a conservative extension
(ie, a theorem that does not refer to classes is true in ZFC if and only
if it is true in the extended language) of ZFC. That does not mean that
they can be stated in ZFC.


Hans Aberg

unread,
Aug 25, 2002, 7:24:25 AM8/25/02
to
In article <ak9cr2$3mp$1...@charm.magnus.acs.ohio-state.edu>, "Vidhyanath
Rao" <nath...@osu.edu> wrote:

>> >I don't think John Harrison is saying much more than the simple fact
>> >that essentially all mathematics can be "reduced" to first-order
>> >axiomatic set theory, e.g., ZFC.
>>
>> Not "essentially all math", but most:
>
>I would contest the "most": The areas that use the language/techniques
>of applied homological/homotopical algebra account for a noticeable
>fraction of current activity.

The main point for this discussion is of course that one cannot expect all
math to be expressible within set theory: That approach is called "naive
set theory".

It is also silly to try to quantify the amount of use by the word "most"
that I used: What I meant is that one usually expects there to be an
application of the theory at some point, and then one expects that
application to be constructible within axiomatic set theory (but exactly
how is usually left unspecified).

>> In axiomatic set theory, the property of being a set is itself not
>> a set, but a predicate; see for example, Mendelson, p. 160, where X
>> being a set
>> M(X) is the property
>> M(X) := exists Y: X in Y.
>> Further down, Mendelson defines a pair (X, Y) := {{X}, {X, Y}}
>> when both X, Y are sets; when they are not both sets,
>>(X, Y) := empty set.
>
>This is only for vNGB or (Quine)-Morse-Kelly versions. In Ackermann's
>set theory for example, M is a primitive and we have ZF for classes
>(at the cost of weakening, compared to Morse-Kelly, the replacement
>axiom). In particular, if X and Y are classes, (X, Y) is also a class
>and not empty.

Do you have a reference (like a good book or a URL)? -- The thing is that
if one should plug it into a computer, it is probably best to let M be a
primitive. So I started to think about that approach for such reasons.

>It should be noted that quite a few papers, in algebraic geometry
>especially, assume that we can do such things with classes. This is the
>whole point of universes as mentioned by MacLane.

I am not sure what you mean here: This would be required for the
description of metacategories if one interprets products A x B literally;
but I think that one might use predicates to express that, in which case it
would not be needed. -- Perhaps this is what Ackermann's set theory does?

> Now, assuming
>arbitrarily large Grothendieck universes is essentially the same as
>Ackermann's set theory + replacement axiom of vNGB/MK. If we use the
>latter, the dependence on the universe is seen to be only apparent.

Meaning exactly what? As the universe U is enlarged, the category of sets
Set_U will also enlarge. So there must be some stability property making
your statement specific.

>No matter what we choose, applied homological/homotopical algebra
>requires a theory of sets and classes. And it requires that we allow
>quantification over classes (which makes vNGB awkward to use). It is
>likely that all known results can be proved in a conservative extension
>(ie, a theorem that does not refer to classes is true in ZFC if and only
>if it is true in the extended language) of ZFC. That does not mean that
>they can be stated in ZFC.

One approach I have heard of is that one may attempt to replace the large
categories one encounters with an equivalent small one. For example, the
category of modules over a ring R can be constructed as the quotient of
R^(I), where I is an index set.

Anyway, in the context of the current discussion, one cannot assume the
naive set theory approach that "all objects are sets", because even though
one is only carrying out computations on objects constructible within
axiomatic set theory, one may rely on a theory which does not assume that.

Timothy Y Chow

unread,
Aug 25, 2002, 2:05:52 PM8/25/02
to
In article <remove.haberg-2...@du131-226.ppp.su-anst.tninet.se>,

Hans Aberg <remove...@matematik.su.se> wrote:
>In article <ak9cr2$3mp$1...@charm.magnus.acs.ohio-state.edu>, "Vidhyanath
>Rao" <nath...@osu.edu> wrote:
>>I would contest the "most": The areas that use the language/techniques
>>of applied homological/homotopical algebra account for a noticeable
>>fraction of current activity.
>
>The main point for this discussion is of course that one cannot expect all
>math to be expressible within set theory: That approach is called "naive
>set theory".

Is the source of all the controversy in this thread simply the conflation
of theoretical and practical? From a practical point of view, the reduction
of all math to set theory is cumbersome and there are times when it does
not make sense to do so. I don't think anybody contests this.

But the statements I've quoted above (admittedly taken out of context)
appear to go beyond such a claim, arguing that significant parts of math
cannot be reduced to set theory (say, ZFC), even *in principle*. Is that
what is being claimed? If not, then I don't have anything further to say.

But if so, then I strongly disagree with both claims. For example, having
to quantify over proper classes isn't a problem. One simply defines a
CLASS to be a set having certain properties and a SET to be a set having
certain properties. Note carefully the distinction between "SET"
and "set"; lowercase here denotes the axiomatic set theory that I am
reducing everything else to, and "SET" is a defined concept like "GROUP"
or "VARIETY" or whatever. Then one can quantify over CLASSes using
first-order set theory (*not* SET theory of course).
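
One concrete way to realize this -- my gloss, and it assumes a
universe-style axiom on top of plain ZFC -- is to fix a set U closed under
the usual operations (a Grothendieck universe) and define

    SET(x)   :<->  x in U
    CLASS(x) :<->  x subseteq U

Then "for every class C, phi(C)" is just the ordinary first-order sentence
"forall c (CLASS(c) -> phi(c))", with c ranging over sets.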

Expressing all math within set theory isn't necessarily "naive." "Naive"
usually means that certain difficulties with set-theoretic paradoxes are
swept under the rug. But ZFC avoids all known paradoxes.

The only reason I put the qualifier "essentially" in my phrase "essentially
all math" was to sidestep debates about whether (for example) non-rigorous,
"Wittenesque" arguments count as mathematics. But all the standard areas
of math whose rigor is beyond question can be handled by ZFC.

Hans Aberg

unread,
Aug 25, 2002, 7:16:20 PM8/25/02
to
In article <3d691c80$0$3928$b45e...@senator-bedfellow.mit.edu>,

tc...@mit.edu (Timothy Y Chow) wrote:
>Is the source of all the controversy in this thread simply the conflation
>of theoretical and practical? From a practical point of view, the reduction
>of all math to set theory is cumbersome and there are times when it does
>not make sense to do so. I don't think anybody contests this.

From the practical point of view, unless you can explicitly describe how
to reduce the math to whatever axioms, set theoretic or not, you will not
be able to put it into a computer doing proof checking.

>But the statements I've quoted above (admittedly taken out of context)
>appear to go beyond such a claim, arguing that significant parts of math
>cannot be reduced to set theory (say, ZFC), even *in principle*. Is that
>what is being claimed? If not, then I don't have anything further to say.

Category theory and homological algebra and such may be developed like
that, so they cannot be reduced to axiomatic set theory. If you want to
reduce them to axiomatic set theory, you may find that you end up having
to do all the work on your own. :-)

>But if so, then I strongly disagree with both claims. For example, having
>to quantify over proper classes isn't a problem. One simply defines a
>CLASS to be a set having certain properties and a SET to be a set having
>certain properties. Note carefully the distinction between "SET"
>and "set"; lowercase here denotes the axiomatic set theory that I am
>reducing everything else to, and "SET" is a defined concept like "GROUP"
>or "VARIETY" or whatever. Then one can quantify over CLASSes using
>first-order set theory (*not* SET theory of course).

I am not sure what you are saying here: Are you proposing inventing a new
theory, redefining commonly used mathematical terms, where predicates are
called "sets", so that one can thereby claim that all is sets ("proof by
definition")? -- I think most mathematicians would find that very
confusing.

Otherwise, the strategy you describe is essentially the naive set theory
approach and does not work, which is why axiomatic set theory was
developed:

One has to depart from certain logical properties, or predicates, and
describe when these can be logically constructed as sets without
contradictions. This is what axiomatic set theory does.

In particular, the property of being a set is not a set in axiomatic set
theory, but a predicate.

>Expressing all math within set theory isn't necessarily "naive." "Naive"
>usually means that certain difficulties with set-theoretic paradoxes are
>swept under the rug. But ZFC avoids all known paradoxes.

The "naive set theory" as referring to non-axiomatic set theory is pretty
much a historic term; so if you do not like it, tough luck. :-)

The paradoxes are caused by the view that "everything is a set", the naive
set theory approach. For example, the resolution of the Russell paradox in
axiomatic set theory consists of recognizing that certain "properties" (or
"classes") are not sets.

>The only reason I put the qualifier "essentially" in my phrase "essentially
>all math" was to sidestep debates about whether (for example) non-rigorous,
>"Wittenesque" arguments count as mathematics. But all the standard areas
>of math whose rigor is beyond question can be handled by ZFC.

So what do your category and homology theories look like? And what about
axiomatic set theory, is that also developed within axiomatic set theory,
if now one should put the axioms into a computer and verify the proof of
consistency? Which axioms do you propose for your universal axiomatic set
theory, if that now should be hardwired into a computer proof checking
system?

Also which math is entirely beyond questions of rigor? -- When one started
to plug hand-computed formulas into symbolic algebra, I think that
something like 10% were found to be faulty. There are probably a lot of
errors out there in the proofs. The question is how much that is not
recoverable: If math is used more, one will recheck the errors and
eventually correct them.

It is important to always question the rigor of the math one encounters,
so that it can be corrected at need.

tc...@lsa.umich.edu

unread,
Aug 26, 2002, 1:50:51 PM8/26/02
to
[Cross-posted to sci.logic, where I hope someone can supply the reference
that I ask for below.]

In article <remove.haberg-2...@du128-226.ppp.su-anst.tninet.se>,


Hans Aberg <remove...@matematik.su.se> wrote:
>I am not sure what you are saying here: Are you proposing inventing a new
>theory, redefining commonly used mathematical terms, where predicates are
>called "sets", so that one can thereby claim that all is sets ("proof by
>definition")?

If you want to put it that way, yes. What I outlined was completely
standard. Most textbooks on foundations discuss something like this,
e.g., Ebbinghaus, Flum and Thomas's _Mathematical_Logic_, or Kunen's
_Set_Theory_ (maybe not the best references but the two I can think
of off the top of my head).

>I think most mathematicians would find that very confusing.

I agree fully. But as I emphasized before and re-emphasize now, I am
talking about what can be done *in principle* and not what would be
useful *in practice*.

>So what do your category and homology theories look like?

I already started sketching this; since the complete details are very
tedious, I won't try to give them all here. Perhaps someone else can
give a reference; I'm not thoroughly familiar with the literature, but
I assume some people have worked out the details of how to reduce set
theory and category theory to each other so that you can pick whichever
of the two foundations catches your fancy.

>And what about
>axiomatic set theory, is that also developed within axiomatic set theory,

You can if you want, yes.

>if now one should put the axioms into a computer and verify the proof of
>consistency? Which axioms do you propose for your universal axiomatic set
>theory, if that now should be hardwired into a computer proof checking
>system?

ZFC, for example.

>Also which math is entirely beyond questions of rigor? -- When one started
>to plug hand-computed formulas into symbolic algebra, I think that
>something like 10% were found to be faulty.

An interesting question, but in my opinion tangential. I used the adverb
"essentially" in order to head off things like this, which I don't think
are directly relevant to the main question under discussion.

Vidhyanath Rao

unread,
Aug 26, 2002, 2:36:38 PM8/26/02
to

"Timothy Y Chow" <tc...@mit.edu> wrote in message
news:3d691c80$0$3928$b45e...@senator-bedfellow.mit.edu...

Perhaps the problem is the differing meanings we attach to words: By a set
theory, I mean a theory in the language with a single undefined binary
predicate \in. A theory of sets and classes has an additional
distinction between classes and sets. ZFC is a specific axiom system in
the former language, to be distinguished from other set theories with
stronger axioms. See below for its drawbacks. vNGB and Morse-Kelly have
set as a defined concept but suffer from drawbacks such as the inability
to apply theorems about small 2-categories to "small diagrams" of large
categories and functors.

> [...] One simply defines a
> CLASS to be a set having certain properties and a SET to be a set having
> certain properties. Note carefully the distinction between "SET"
> and "set"; lowercase here denotes the axiomatic set theory that I am
> reducing everything else to, and "SET" is a defined concept like "GROUP"
> or "VARIETY" or whatever. Then one can quantify over CLASSes using
> first-order set theory (*not* SET theory of course).

> [...] But all the standard areas
> of math whose rigor is beyond question can be handled by ZFC.

The problem is that for standard arguments in algebraic topology to
work, the category of SETs must have all limits/colimits of lots of
functors from SMALL categories: In particular, I should be able to state
the theorems and more or less mechanically translate the proofs from,
say, Ravenel's two books on stable homotopy and all their prerequisites,
and Dror-Farjoun's monograph on localization. [There is more, such as
the recent work on convenient categories for stable homotopy, and
Hirschhorn's monograph in progress, but they can wait.] I would like to
know how to produce a category of SETs with such properties within ZFC.
[You can add stronger axioms, but then it is not ZFC.]

John Harrison

unread,
Aug 26, 2002, 2:56:45 PM8/26/02
to
Please note that I only set out to refute the assertion that first order
logic is "not of any use":

| But if your intent is to be able to use something useful in pure math,
| then the first order logic is not of any use: You must use second order
| logic in order to be able to capture actual mathematical usage.

My refutation was based on the rather well-known "reduction" of most
mathematics to first-order set theory. You and some other contributors to
this thread have brought up various questions such as

* the difficulty of formalizing category theory

* the representational unfaithfulness of formal "reductions"

* the infeasibility of fully automatic formal proofs

While interesting, these lend no obvious support to your original
statement, since they all have roughly the same force whether one is
considering first-order or higher-order formalizations. As an example, I
will briefly address the formalization of category theory below, but let
me first focus on trying once again to clear up your original confusion.

| >Perhaps, despite your professed enthusiasm for the
| >"exact definitions of formal logic", you are just unfamiliar with the
| >usual meanings of "first order logic" and "second order logic".
|
| Yes, I have no idea of what you mean by this, despite that I repeatedly
| asked for you defining it.

I'm afraid I failed to notice these repeated requests, maybe because
ploughing through the various peripheral issues of the sort identified
above was too tedious for me. But perhaps your original claim is to be
construed as a request for such definitions? (Rather as a failed suicide
bid is often held to be a plea for help.)

The book by Shapiro that I've already recommended ("Foundations without
Foundationalism") is a good place to find a discussion of this topic. The
distinction between first-order and higher-order logic is a little
subtler than it might appear, actually. But in order to understand the
refutation of your original point, you only need to know what first-order
logic is, and since you have already mentioned one or two standard texts,
I assume that you already know that.

As far as I can see, your underlying error is to ascribe the description
"higher-order" to a formalization when it involves quantification over
what in the *informal* mathematical conception are "higher order"
(functions and relations). However, the whole point I was making is that
in the usual set-theoretic reduction, these are all just sets, and so
such quantification is admissible in first-order logic. Therefore,
first-order logic is a quite satisfactory vehicle for the formalization
of mathematics, and is not "of no use". That's all. (Analogously,
if one represents a geometric shape inside a formal development by a set
of algebraic equations, it doesn't mean that the underlying formal system
must be "geometrical", even if that's how one thinks of it intuitively.)

*

Now a bit of waffle about the formalization of category theory and its
(lack of) relevance to your original claim.

It is indeed difficult to take standard category-theoretic notions such
as "the category of all sets" at face value in the usual first-order ZFC
reduction of mathematics. It was with category theory in mind that I
carefully wrote "most mathematics" rather than "all mathematics".

One way of dealing with such category-theoretic notions is to go beyond
first-order systems, e.g. work in a formulation of ZFC set theory based
on higher-order logic (simple type theory). However, it is easy to
subsume any such higher-order setup in a first-order set theory with
additional axioms, such as the Tarski-Grothendieck axiom of universes
adopted by Mizar. If I recall correctly, there is some discussion of this
in the appendix to Grothendieck et al's "SGA 4". (A similar point has
already been made by Tim Chow.)

Incidentally, if we are considering merely formalization "in principle"
of proofs involving intermediate category-theoretic notions, it seems
likely that the use of universe axioms is not necessary, as a consequence
of the Reflection Principle. This point is made by Kreisel; cf. Saaty,
"Lectures on Modern Mathematics III", p118. Trying to formalize this idea
in such a way that one could take category-theoretic notions at face
value yet finally arrive at a formalization in first-order ZFC would
indeed be a good application for Hans Aberg's pet project of formalizing
metatheory.

*

Let me finally add that I am not in any way opposed to using higher-order
logic in formalizing mathematics. That would be strange indeed, since
I've spent much of the last 12 years doing just that. Higher-order logic
*does* have real advantages as a vehicle for such formalizations. I
merely didn't like the absurdly categorical denial that the first-order
alternative is possible. A first-order formalization is not only
possible, but is in some meaningful sense the "standard" foundation of
mathematics.

John.

Vidhyanath Rao

unread,
Aug 26, 2002, 2:40:31 PM8/26/02
to

"Hans Aberg" <remove...@matematik.su.se> wrote in message
news:remove.haberg-2...@du131-226.ppp.su-anst.tninet.se...

> Do you have a reference (like a good book or a URL)?
> [for Ackermann's theory etc]

There is an informative discussion of various theories of sets and
classes in "Foundations of set theory" by Fraenkel et al, 2nd rev
edition. [I think that the relevant chapter was written by Azriel Levy.]
For mathematical details, though, you probably need to look at the
original papers: start with the references given in that book and work
forward using MathSciNet/Math Reviews or Sci. Citation index. If there
is a more up-to-date book, I would like to hear of it too.

> >It should be noted that quite a few papers, in algebraic geometry
> >especially, assume that we can do such things with classes. This is
> >the whole point of universes as mentioned by MacLane.
>
> I am not sure what you mean here: This would be required for the
> description of metacategories if one interprets products A x B
> literally; but I think that one might use predicates to express
> that, in case that would not be needed. -- Perhaps this is what
> Ackermann's set theory does?

It depends on what you mean by 'predicates' here. It can be argued that
theories of classes are alternatives to second order set theories, the
differences being in what you are allowed to do with classes. If by
predicates you mean (first order) sentences in the language of set
theory, it might be doable if you use global choice (this adds a unary
function symbol \sigma, and an axiom that if x is not empty, then \sigma
x \in x). But then category theory becomes a part of meta-mathematics,
and that flies in the face of history.

>
> > Now, assuming
> >arbitrarily large Grothendieck universes is essentially same as
> >Ackermann's set theory + replacement axiom of vNGB/MK. If we use the
> >latter, the dependence on the universe is seen to be only apparent.
>
> Meaning exactly what? As the universe U is enlarged, the category of
> sets Set_U will also enlarge. So there must be some stability
> property making your statement specific.

I am viewing 'sets' as the 'real' objects of mathematics, and classes
(or categories) as a way of speaking about them to make proofs and ideas
easier to understand (and more widely applicable). In that case, a
theorem which goes "for each universe U, there exists X_U in a higher
universe..." and where the dependece is real is not useful. Anyway, we
need to worry about it only when such a theorem is proved and applied to
preexisting math.

The use that I have seen for universes, namely that a result proved for
'small' categories is automatically true for 'large' categories, is true
in Ackermann's theory. The drawback is that we can only do 'definable'
categories of large objects, but it seems that that is all we need (to
date).

> One approach I have heard of is that one may attempt to replace the
> large categories one encounters with an equivalent small one.
> For example, the category of modules over a ring R can be
> constructed as the quotient of R^(I), where I is an index set.

If you are using equivalent in the technical sense, there are no such
things in general. If you meant 'similar', it is not a priori clear that
the category has all needed limits and colimits. There is also the fact
that you run into the same problem as with universes: If things
stabilize as the set I becomes large enough, it is something that needs
to be proved each time, and clarity is improved by not using I in the
first place. And, by design, category theory deals precisely with such
situations.

Hans Aberg

unread,
Aug 26, 2002, 4:22:41 PM8/26/02
to
In article <akdppr$5ke$1...@galois.mit.edu>, tc...@lsa.umich.edu wrote:

>>I am not sure what you are saying here: Are you proposing inventing a new
>>theory, redefining commonly used mathematical terms, where predicates are
>>called "sets", so that one can thereby claim that all is sets ("proof by
>>definition")?

>If you want to put it that way, yes. What I outlined was completely
>standard.

Please take no offense, but I think you have misunderstood axiomatic set
theory. In naive set theory, one writes
S := { x | P(x) }
and identifies S with P; but that is not how axiomatic set theory goes:
instead, it carefully examines the relation between S and P.

I implemented a Prolog variation where the syntax "x in S" is syntactic
sugar for the Prolog-like "S(x)", but that led to a number of problems,
which led me to take up the more accurate meta-logical approach I am
using now.
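
For the record, the sugar itself was essentially a one-liner; a minimal
sketch (my reconstruction here, not the actual code) is:

    :- op(700, xfx, in).
    X in S :- call(S, X).    % "x in S" reduced to applying S to x

    even(N) :- integer(N), 0 =:= N mod 2.

    % ?- 4 in even.   succeeds
    % ?- 3 in even.   fails

One problem is visible already here: S must be a predicate, so a
membership like "S in T", with S itself set-valued, has no direct meaning
-- one of the ways the identification of sets with predicates breaks down.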

>>I think most mathematicians would find that very confusing.

>I agree fully. But as I emphasized before and re-emphasize now, I am
>talking about what can be done *in principle* and not what would be
>useful *in practice*.

Well, that is not interesting in the context of the subject of this thread:

Unless one specifies exactly what the axioms are and how they should be
applied, as done in meta-mathematics, one will not be able to fit them
into a computer proof checking system.

>>And what about
>>axiomatic set theory, is that also developed within axiomatic set theory,
>
>You can if you want, yes.

The point with meta-theory is to avoid meta-logical axioms as much as
possible, so that one can be sure that the meta-logical system is not
contradictory.

>>if now one should put the axioms into a computer and verify the proof of
>>consistency? Which axioms do you propose for your universal axiomatic set
>>theory, if that now should be hardwired into a computer proof checking
>>system?
>
>ZFC, for example.

One has not gone so far as to agree on which axioms to use in axiomatic
set theory. Different axiomatic systems have different advantages; see the
post by Vidhyanath Rao, for example.

>>Also which math is entirely beyond questions of rigor? -- When one started
>>to plug hand-computed formulas into symbolic algebra, I think that
>>something like 10% were found to be faulty.

>An interesting question, but in my opinion tangential. I used the adverb
>"essentially" in order to head off things like this, which I don't think
>are directly relevant to the main question under discussion.

I think the thread topic has already been departed from: These suggestions
are too unspecific not only to be fit into a computer proof checking
system, but also to do rigorous math with. I will believe it when I see it
developed formally, as in meta-mathematics. :-)

For claims such as "all math can be reduced to this or that axiomatic set
theory", I would want to see a proof before believing it. :-)

If you want to see how well your ideas fit into a computer, but don't want
to write all the gory details yourself, you could try using:
Qu-Prolog:
http://www.svrc.it.uq.edu.au/pages/QuProlog_release.html
Earlier versions
http://svrc.it.uq.edu.au/Bibliography/svrc-tr.html?00-20
http://svrc.it.uq.edu.au/Bibliography/svrc-tr.html?00-21

It is a meta-system that allows quantifiers and bound expressions to be
defined, which can be specialized into a suitable system like:
Ergo
http://www.svrc.it.uq.edu.au/pages/Ergo_release.html
The built-in Ergo theories jprop and iprop are based on
Gentzen - "Investigations into Logical Deductions"
Heyting - "Die Formalen Regeln der Intuitionistischen Logik"
Heyting - "Intuitionism: An introduction"

Anyway, I think this thread pretty much has exhausted itself, only
repeating old facts over and over again.

tc...@lsa.umich.edu

unread,
Aug 27, 2002, 6:13:31 PM8/27/02
to
In article <akdsos$9h0$2...@charm.magnus.acs.ohio-state.edu>,

Vidhyanath Rao <nath...@osu.edu> wrote:
>I would like to know how to produce a category of SETs with such properties
>within ZFC. [You can add stronger axioms, but then it is not ZFC.]

I'm not sure I understand exactly what you're asking for, but your mention
of "stronger axioms" sounds to me like you know of an axiomatization that
is sufficient for your intended applications, but that is not known to be
(or is provably not) equiconsistent with ZFC. Yes?

For the point I'm trying to make, ZFC isn't privileged, but was just an
example of an axiomatic set theory. If your system of axioms is
equiconsistent with *some* set theory, obtained by adding set-theoretic
axioms to ZFC, then that pretty much establishes my point that set theory
is one possible foundation for essentially all mathematics. (Actually,
one probably wants a little more than equiconsistency; it would be
preferable to be able to interpret each system in the other in a
straightforward way.) I don't know enough about the particular area
of math you cite to be able to say whether such an extension of ZFC is
easily constructed, but I would be surprised if there are insurmountable
difficulties. Are you aware of such?

Vidhyanath Rao

unread,
Aug 29, 2002, 12:30:08 PM8/29/02
to

"John Harrison" <jo...@ichips.intel.com> wrote in message
news:c616467c.02082...@posting.google.com...

I should perhaps add the disclaimer that I am not interested in first
order vs higher order (which seems to me, an admitted non-Platonist, to
have more to do with philosophy than with axiomatics/formalization as
error-avoiding methods) unless we see classes as extensions of properties;
but then, of course, we can turn that into a two-sorted first order theory
and leave philosophy to philosophers.

> Incidentally, if we are considering merely formalization "in principle"
> of proofs involving intermediate category-theoretic notions, it seems
> likely that the use of universe axioms is not necessary, as a consequence
> of the Reflection Principle. This point is made by Kreisel; cf. Saaty,
> "Lectures on Modern Mathematics III", p118.

A better reference would be Feferman's paper in "Reports of the Midwest
Category Seminar III". That goes into more detail. However, that is old
and predates some constructions that have turned out to be crucial in
algebraic topology (and, AFAIK, algebraic geometry). This makes the
above remark more troublesome.

Unfortunately, to explain the problem, I need to be longwinded and tax
everybody's patience. I hope people will excuse it just once.

---

ZF(C) + RP (reflection principle) adds a primitive constant V along
with an axiom scheme that says that a statement in the language of ZF is
true if and only if its relativization to V is. If we interpret the
elements of V as sets and arbitrary 'sets' as classes, this implies all
the axioms of Ackermann's set theory (A. Levy). On the other hand, all
the axioms of ZF are true for sets in Ackermann's set theory +
Regularity (Reinhardt for replacement, Ackermann already had proved
others). So the main difference between the two is that ZFC+RP has more
familiar (and stronger?) class construction methods. But the former is
still sets+classes (but called 'small sets' and 'large sets' in
imitation of categorists' terms).

The rub is that we need to be careful about the replacement axiom in
ZFC+RP. A common step in category theory is to take a functor F : A to C
into a complete category C and a small diagram D in A and then take the
limit of F(D). This cannot be done in ZFC+RP. We have to assume that F
takes small sets of morphisms into small sets, which is true if F is
definable but not in general. (This may help understand the difference:
a well-order of the reals exists in ZFC, but is not definable.) Along the
same lines, we often have to impose growth conditions on abstract
categories, such as "every small set of morphisms is contained in a
small subcategory that reflects isomorphisms", redefine 'locally small'
as "every small set of objects is contained in a small full subcategory",
and worry about which constructions preserve such properties. We also
have to worry about existence of transfinite iterations [If F : C to C
is an endofunctor of a cocomplete category with a natural transformation
from Id to F, define F^\alpha by F^{\alpha+1} = F F^{\alpha}, and if
\alpha is a limit ordinal, then F^{\alpha} is the colimit of F^{\beta},
\beta < \alpha. These occur in constructions of localizations] and
growth conditions on them. Ditto for simplicial objects from monads
(which may come from localizations which may be transfinite iterations)
and realizations of such simplicial objects collected into a whole as a
functor. My gut feeling is that this is doable. But >far< from trivial.
For example, Neeman's book on triangulated categories takes scores of
pages to show that localizations of locally small triangulated
categories with suitable properties (that are fulfilled in the intended
applications) are still locally small. We will need to modify this for
the new definition of locally small. Nor is this a marginal example:
Neeman's book is intended as a careful treatment of the prerequisites
for Voevodsky's applications to motives.
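
[For reference, the transfinite iteration in brackets above, set out on
its own; the base case F^0 = Id is implied:

    F^0        = Id
    F^{a+1}    = F o F^a
    F^{lambda} = colim_{b < lambda} F^b    (lambda a limit ordinal)

with the natural transformation Id -> F supplying the maps of the colimit
diagram.]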

There is also a technical point: Some constructions involve choice. So
not all functors used may be definable in the language of ZFC. We can
get around this by taking (ZF + global choice) + RP, which assumes
reflection for statements involving the choice function symbol also. This
is a priori richer than ZF+RP+choice. The former is almost certainly a
conservative extension of ZFC, but this still needs to be checked.

Vidhyanath Rao

unread,
Aug 29, 2002, 12:26:35 PM8/29/02
to

<tc...@lsa.umich.edu> wrote in message
news:akgtib$3gr$1...@galois.mit.edu...

> I'm not sure I understand exactly what you're asking for, but your
> mention of "stronger axioms" sounds to me like you know of an
> axiomatization that is sufficient for your intended applications, but
> that is not known to be (or is provably not) equiconsistent with ZFC.
> Yes?

And no. The usual out is to invoke Grothendieck universes: Fix a universe
U. "Small" objects (such as sets, spaces, varieties) are assumed to be
in U, but "large" categories and functors can be outside (usually in
U* = U \union Powerset(U) \union Powerset(Powerset(U)) \union ...).
But here is the rub: (Co)Completeness must refer to 'small' diagrams, so
we need to distinguish between elements of U and of U*, i.e., we have
entities of two different sorts (pun intended). Why not build this into
the language as sets vs classes? [The existing literature on derived
categories does not always pay attention to small vs locally small vs
large, and calling everything a set simply masks the problem.]

I personally will be happy with Zermelo (ZF - replacement) for classes
plus a suitably modified Morse-Kelly for sets. But the distinction between
sets and classes stays in some form.

As to whether conservative extensions of ZFC are enough, see my reply to
John Harrison.

tc...@lsa.umich.edu

unread,
Aug 29, 2002, 7:44:51 PM8/29/02
to
In article <aklihk$j5j$3...@charm.magnus.acs.ohio-state.edu>,

Vidhyanath Rao <nath...@osu.edu> wrote:
>Unfortunately, to explain the problem, I need to be longwinded and tax
>everybody's patience. I hope people will excuse it just once.

No need to apologize...that was very interesting and informative.

I'm not sure we're disagreeing all that much. What I think you're saying
is that one needs both sets and classes, in such a way that they're not
easily reducible just to sets (although they still might be reducible, and
in fact your guess is that they *are* reducible, albeit in a nontrivial
way). I had thought that standard tricks, such as reflection, would
suffice for applications; according to you things are not that simple,
and I stand corrected on that point. But I believe we both agree that
(1) first-order logic is still sufficient in principle, whether we stick
with just sets or admit both sets and classes; and (2) even the examples
you gave don't warrant the blanket condemnation that "hoping that set
theory can be a foundation for all math is too naive"---it's just that
the jury's still out on certain specific questions. On both (1) and
(2) I believe I am still in clear disagreement with Hans Aberg. But I
agree with Hans Aberg that we are reaching an impasse and it probably
won't help to keep discussing the matter further.

Hans Aberg

unread,
Aug 30, 2002, 7:04:29 PM8/30/02
to
In article <akmblj$l0j$1...@galois.mit.edu>, tc...@lsa.umich.edu wrote:

> But I believe we both agree that
>(1) first-order logic is still sufficient in principle, whether we stick
>with just sets or admit both sets and classes; and (2) even the examples
>you gave don't warrant the blanket condemnation that "hoping that set
>theory can be a foundation for all math is too naive"---it's just that
>the jury's still out on certain specific questions. On both (1) and
>(2) I believe I am still in clear disagreement with Hans Aberg. But I
>agree with Hans Aberg that we are reaching an impasse and it probably
>won't help to keep discussing the matter further.

First one can note that science is not a democracy, where the majority
decides by vote what the scientific truths should be.

The statement in the Otter manual was that the first order formulations
were not practical, not that common math in principle cannot be translated
to first order. In fact, Mendelson, "Intro to Math Logic", p. 56 notes
that most higher order theories can be reduced to first order.

I retorted that a computer system that is to be practicable must be able
to support higher order formulations, if that is what mathematicians use,
even if the computer then internally resolves them via some first order
theories. For example, set theoretic functions appearing as arguments in
functionals can be translated into a first order axiomatic set theory. But
it would not be practical to require the mathematicians to do that
reformulation by hand before the stuff can be plugged into the computer.

This applies to the logic. As for the metalogic used in order to conduct
proofs, I landed on a Prolog implementation that is second order in the
sense of Sterling & Shapiro, "The Art of Prolog", sec. 16.3. Thus, even if
the logic one is using is only first order, the total logic plus metalogic
system is of higher order.
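
("Second order" here means, roughly, predicates taking predicates as
arguments -- standard textbook material, sketched with my own names:

    map_list(_, [], []).
    map_list(P, [X|Xs], [Y|Ys]) :- call(P, X, Y), map_list(P, Xs, Ys).

    double(X, Y) :- Y is 2*X.
    % ?- map_list(double, [1,2,3], Ys).   gives Ys = [2, 4, 6]

Nothing here quantifies over predicates inside the logic itself; the
higher-orderness lives in the metalogic, which is the point.)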

As for axiomatic set theory, Mendelson, p. 56, says that for the
metalogic, most prefer to use syntactic over semantic formulations, as the
latter lead to set theoretic formulations, and there are logical problems
with those. Thus, on the metalevel, most try to avoid any set lookalikes,
due to the problems that may ensue.

Also look into Church's "The Calculi of Lambda-Conversion" to see that it
uses a syntactic, not semantic, formulation, and thus sets do not exist on
this metalevel. That is a great point of the lambda calculus: it does
not depend on set theory.

Thus, if one should not exclude the typical practices of metamathematics,
one cannot "reduce everything to sets", as at the meta stage, axiomatic
set theory does not exist on the logical level one is reasoning with.

tc...@lsa.umich.edu also wrote:
[snipped]


>What I think you're saying
>is that one needs both sets and classes, in such a way that they're not
>easily reducible just to sets (although they still might be reducible, and
>in fact your guess is that they *are* reducible, albeit in a nontrivial
>way). I had thought that standard tricks, such as reflection, would
>suffice for applications; according to you things are not that simple,
>and I stand corrected on that point.

It might suffice for Vidhyanath Rao and similar theories using sets and
classes, if one never touches the metalevel. But there are some instances
where one does touch it (like when speaking about metagraphs and
metacategories). Then axiomatic set theory does not even exist on the
metalevel one is reasoning at.

I am now comparing my own experimental Prolog implementation with what is
done in the Qu-Prolog program:

It seems that we are doing essentially the same thing, namely,
implementing the axioms of the working mathematics using the Prolog system
to represent the metalogic used in metamathematics. I use books like the
one by Mendelson, and I figure that Peter Robinson is using his experience
of implementing a meta-Prolog system. Since this is wholly independent
work, there seems to be a natural metalevel to walk down.

There seem to be great advantages to this approach: From the mathematical
point of view, it seems that one gets great control over the proof
process. This is of course important in math, where one uses many
different logical strategies in order to push through a proof.

Therefore, I still think this is the approach to use if one should be able
to admit the complicated types of proofs that mathematicians do use.

David desJardins

unread,
Aug 31, 2002, 8:54:16 PM8/31/02
to
Hans Aberg writes:
> First one can note that science is not a democracy, where the majority
> decides by vote what the scientific truths should be.

Actually that's more or less exactly how science works. The only
difference is that not everyone gets exactly one vote: some people have
more prestige, and their votes count more.

David desJardins

Hans Aberg

unread,
Sep 1, 2002, 6:15:52 AM9/1/02
to
In article <vohbs7i...@math.berkeley.edu>, David desJardins
<da...@desjardins.org> wrote:

First, tying this up to mathematics: I made an extension of the STV
(single transferable vote) election system to parties. In such a voting
system, there is nothing strange about having ballots with different
voting values. It actually occurs naturally, as the ballots are split up
into fractions during the counting process.

Otherwise, science in general, and math in particular, may work so, but in
as much as it does work so, it is a corruption: In a voting system, one is
free to vote according to one's whim.

But a mathematician saying that a proof of the competition is erroneous
because he wants the result not to be published will be viewed as corrupt.
If voting were what is used when checking proofs, this practice would be
OK.

The fact that math may work as a corrupt unequal voting system has surely
affected the development of math negatively many times in history:

For example, I recall that Galois sent his results to Cauchy for
publication, and Cauchy didn't respond. There is a rumor that Cauchy used
to look through papers that arrived and first see whether he thought he
could prove the results; if he thought he could, he would write them up as
his own, not publishing the submitted paper.

I must emphasize that this is just a rumor that a mathematician once
quoted to me: True or not, the thought is interesting, because Cauchy is
known to have many results in many diverse fields but with often faulty
proofs. Perhaps this is why the rumor was started, as it fits the pattern
of his apparent achievements.

Otherwise, it is clear that established scientists often try to use their
"increased vote" to steer science, often towards their own fields at the
expense of other scientists: One of the more notorious examples is perhaps
Newton, who eradicated the scientific records of his competitors (I just
watched a program about him on the "History Channel"). And there is the
infamous Newton-Leibnitz dispute, where somebody noted that if there had
been two business companies providing mathematical calculus, Newton's
company would quickly have gone bankrupt, in view of his cumbersome
"fluxion" notations. -- Today, we use the Leibnitz notation for integrals
and derivatives.

It is then up to the scientific community to counteract this tendency of
"voting" by prestige: In the case of the Newton-Leibnitz dispute, some
claimed it turned into a setback for English mathematics for quite some
time, as one was quite nationalistic in those days, and they then ended up
using Newton's cumbersome notation, instead of the much better Leibnitz
notation.

So, summing it up, there are such tendencies, but in as much as they
occur, one can trace some impediments as far as the proper scientific
developments are concerned.

In modern math, somebody noted that there has developed a "monster-field"
(consisting of various algebra related techniques, such as number theory,
geometry, groups, homological algebra, topology, etc.):

I think that one component in the development of such a monster-field
might be a steep hierarchical structure, combined with tight control at
the top. For the monster-field, the control group would be those closely
in control of the Fields medal.

I know that publishing practices are rather corrupted: For example, a
paper in group theory was asked to be rewritten so as to look as though it
was a number theoretic paper. A topologist would throw in some rather
unrelated number theoretic formulas in order to get it published. A
wellknown Fields medalist (Serre; I know it is true, because I have a copy
of such a letter :-) ) would not bother refereeing submitted papers, but
merely respond that he would not consider the submitted paper as
submitted, and that if the submitter would insist on submitting it, it
would not be published anyhow. In Princeton, they said that one way to
make sure a paper would not be published would be, apart from sending it
to an unfavorable referee, to send it along with another paper that the
referee really wants to see published and say that there is room for only
one. A permanent member at the IAS and editor (with strong tendencies of
wanting to form a "school") told me that he would not accept papers of
what he personally considers the wrong field, regardless of their quality.

The journal "Inventiones" used to have the practice that for close friends
of the editor (like Borel; I derived this from knowing when certain
preprints had been submitted and published in the mid-1980'ies), the
publishing time (including refereeing) was six months, but otherwise the
"backlog" caused the publishing time to be three years (same as on "Annals
of Mathematics", I recall). (I think that the AMS tried to do something
about these practices, but failed.) The followups would then be published
at a spacing of two years, which means that the purpose of scientific
publication as communication is invalidated: At the time a paper appears,
it is already old, as the followup has already been written. So it becomes
impossible for an outsider to produce effective results in that field.

With such practices, the monster-field is steered into dealing with a
rather limited range of subjects, impairing the diversity of the field.

So the tendency of "voting by prestige" is probably strong in
contemporary pure math.

I think that one problem pure math has is that it is not dictated by
external, empirical facts as strongly as experimental science is. It
thus gives more room for individuals to control it. The simplest remedy
is probably to open up for mathematicians to work on things other than
just this "monster-field", preferably fertilized by cross-feeding.

Also, a bottleneck in publishing is the proof-checking: One can always
cite doubts about the correctness of the proofs as a reason for a result
not to be published. In reality it is too time-consuming to check all
the proofs. One instead relies on the author being a well-established
fellow working within a well-known field.

However, this situation makes it difficult for those who want to
establish themselves, and also for working with new cross-fields: In the
latter case, there is really no one available to check the proofs, as
referees do not team up, each resolving the part they can understand. So
the paper cannot really be published.

In addition, one can note that many new, unexpected proofs contain
techniques that may appear to be erroneous at first sight, but which are
correct on closer examination. This fact makes the refereeing procedure
even more perilous, if one regards it as a means of ensuring scientific
accuracy, which in pure math includes ensuring the correctness of the
proofs.

Therefore, if one in the future can make a computer proof-checking
program that can do the proof checking humans do now, that may ease that
publishing bottleneck.

But then the requirements on such a program will be much heavier than on
the computer systems we see now: One cannot say, as has frequently been
said in this thread, that "all mathematics can be reduced to first order
axiomatic set theory system X", because if not all mathematics is
developed that way, it cannot be checked by the computer system: Pure
mathematics itself does not come with any such requirement of a certain
axiomatic set theory system, and if the computer cannot handle it
conveniently, humans will have to do it by hand.

And if humans must do that checking by hand, then one ends up with the
old, costly equation, as human work time is expensive. By contrast,
computer time is cheap.

I do not see this as any stranger than the principles that eventually
led the AMS to engage Don Knuth to do TeX.

RL

unread,
Sep 1, 2002, 10:10:29 AM9/1/02
to
remove...@matematik.su.se (Hans Aberg) writes:

> For example, I recall that Galois sent his result to Cauchy for
> publication, and Cauchy didn't respond.

See Tony Rothman's excellent article at

http://godel.ph.utexas.edu/~tonyr/galois.html

from which

<quote>
We now encounter a major myth which evidently had its origins in the
very first writings on Galois and which had been perpetuated by
virtually all writers since. This myth is the assertion that Cauchy
either forgot or lost the papers (Dupuy, Bell[21]) or intentionally
threw them out (Infeld [22]). Recently, however, Rene Taton has
discovered a letter of Cauchy in the Academy archives which
conclusively proves that he did not lose Galois's memoirs but had
planned to present them to the Academy in January 1830 [23]. There
is even some evidence that Cauchy encouraged Galois. I will discuss
this letter and related events in more detail below; for now I note
only that to hold Cauchy responsible for "one of the major disasters
in the history of mathematics," to paraphrase Bell [24], is simply
incorrect, and to add neglect by the Academy to the list of Galois's
difficulties during this period appears entirely unwarranted.
</quote>

Hans Aberg

unread,
Sep 1, 2002, 1:45:01 PM9/1/02
to

>See Tony Rothman's excellent article at

>http://godel.ph.utexas.edu/~tonyr/galois.html

><quote>


> We now encounter a major myth which evidently had its origins in the
> very first writings on Galois and which had been perpetuated by
> virtually all writers since. This myth is the assertion that Cauchy
> either forgot or lost the papers (Dupuy, Bell[21]) or intentionally
> threw them out (Infeld [22]). Recently, however, Rene Taton has
> discovered a letter of Cauchy in the Academy archives which
> conclusively proves that he did not lose Galois's memoirs but had
> planned to present them to the Academy in January 1830 [23]. There
> is even some evidence that Cauchy encouraged Galois. I will discuss
> this letter and related events in more detail below;

This is interesting.

One should be aware, though, that dates in the past were often forged in
order to assert one's claims:

For example, there is a myth that Newton was very young when discovering
the results about gravitation.

However, the History Channel program said that by not publishing, one
could later claim an earlier date at the time of publication, thus
showing the world that one was first with the results. So this is most
likely what Newton actually did.

Such practices make it very difficult to tell who did what. :-)

Herman Rubin

unread,
Sep 1, 2002, 2:59:52 PM9/1/02
to
In article <vohbs7i...@math.berkeley.edu>,
David desJardins <da...@desjardins.org> wrote:

I disagree. Here is a quote from Galileo, and it is still
correct:

"In questions of science, the authority of a thousand is not worth
the humble reasoning of a single individual."

Unfortunately, it is administrators who try to overturn
this, and establish "authority".


--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
hru...@stat.purdue.edu Phone: (765)494-6054 FAX: (765)494-0558


John Baez

unread,
Sep 2, 2002, 8:05:29 PM9/2/02
to
In article <akdppr$5ke$1...@galois.mit.edu>, Tim Chow wrote:

>>Tim Chow wrote:

>>>For example, having
>>>to quantify over proper classes isn't a problem. One simply defines a
>>>CLASS to be a set having certain properties and a SET to be a set having
>>>certain properties.

Right. Sometimes people use the words SET and SMALL SET when they are
playing this game. But sometimes one wants a hierarchy with more
than two sizes of "SET", e.g. when considering the 2-category of all
categories. For this it is best to assume Grothendieck's axiom of
universes, which very roughly speaking establishes an infinite
hierarchy of larger and larger sets with the property that "from inside"
each one looks a lot like the class of all sets. This is pretty widely
done, and one can see it being applied to algebraic topology here:

R. W. Thomason, Symmetric monoidal categories model all connective
spectra, Theory and Appl. of Categories 1 (1995), 78-118.

>Note carefully the distinction between "SET"
>and "set"; lowercase here denotes the axiomatic set theory that I am
>reducing everything else to, and "SET" is a defined concept like "GROUP"
>or "VARIETY" or whatever. Then one can quantify over CLASSes using
>first-order set theory (*not* SET theory of course).

>>I am not sure what you are saying here: Are you proposing inventing a new
>>theory, redefining commonly used mathematical terms, where predicates are
>>called "sets" so that one by that can claim all is sets ("proof by
>>definition")?

The point here is not to "prove" anything, but to establish a
rigorous framework in which you can stop worrying about Russell's
paradox and get on with the business of mathematics.

>If you want to put it that way, yes. What I outlined was completely
>standard. Most textbooks on foundations discuss something like this,
>e.g., Ebbinghaus, Flum and Thomas's _Mathematical_Logic_, or Kunen's
>_Set_Theory_ (maybe not the best references but the two I can think
>of off the top of my head).

You can also find it discussed at the very beginning of Borceux's
book:

Francis Borceux, Handbook of Categorical Algebra, Cambridge U. Press 1994.
Volume 1: Basic Category Theory

>>I think most mathematicians would find that very confusing.

>I agree fully.

I don't think it's so confusing. If it helps to use the terms
"small set" and "large set", that's fine, but the ultimate goal
is simply to stop worrying about Russell's paradox without feeling
guilty about it, and once you see how it's done, one rarely needs
to worry about this stuff.

I should emphasize that most of category theory is *not* about
foundations, so most category theorists are content to learn the
bare minimum about these issues.

>>So how do your category and homology theories look like?

>I already started sketching this; since the complete details are very
>tedious, I won't try to give them all here.

Borceux's book is the usual reference for this, and it has
references to more detailed stuff if one actually cares.

John Harrison

unread,
Sep 3, 2002, 1:29:11 PM9/3/02
to
remove...@matematik.su.se (Hans Aberg) wrote in message news:<remove.haberg-3...@du128-226.ppp.su-anst.tninet.se>...

| The statement in the Otter manual was that the first order formulations
| were not practical, not that common math in principle cannot be translated
| to first order. In fact, Mendelson, "Intro to Math Logic", p. 56 notes
| that most higher order theories can be reduced to first order.

The statement in the Otter manual concerned fully automatic proofs.
Fully automated theorem proving in higher order logic is somewhat more
developed than automated theorem proving in first order set theory (see
TPS for example). But the difference barely registers when seen in the
context of the difficulty of automating non-trivial mathematics. One
will always need human guidance, whatever the foundational system
chosen.

| I retorted that a computer system that would be practicable must be able
| to support higher order formulations, if that is what mathematicians use,
| even if the computer then internally resolves them via some first order
| theories. For example, set theoretic functions appearing as arguments in
| functionals can be translated into a first order axiomatic set theory. But
| it would not be practically to require the mathematicians to do that
| reformulation by hand before the stuff can be plugged into the computer.

What "reformulation" do you have in mind? First order set theory permits
one to reason in a fairly direct way with functions and other sets as
basic objects.

| This applies to the logic. As for the metalogic used in order to conduct
| proofs, I landed on a Prolog implementation that is second order in the
| sense of Sterling & Shapiro, "The Art of Prolog", sec. 16.3. Thus, even if
| the logic one is using is only first order, the total logic plus metalogic
| system is of higher order.

The usual aim of automated theorem proving is to actually do proofs in
a formal system, not prove metatheorems about the formal system. Why do
you keep bringing up metatheory?

| As for axiomatic set theory, Mendelson, p. 56, says that as for the
| metalogic, most prefer to use syntactic over semantic formulations, as
| the latter leads to set theoretic formulations, and there are logical
| problems with those. Thus, on the metalevel, most try to avoid any set
| lookalikes due to the problems that may ensue.

It is indeed traditional in much of proof theory to avoid set-theoretic
apparatus at the meta-level, reflecting the "foundational" roots of the
subject. The most popular framework for meta-reasoning is probably
Primitive Recursive Arithmetic (PRA); cf. Girard's "Proof Theory" book,
passim. This is far from being higher-order --- indeed one common
formulation is quantifier-free. So even if one supposes that metatheory
is of any relevance to this discussion, the usual practice of logicians
working in the field hardly supports a preference for higher-order
logic.

| Also look into Church's "The Calculi of Lambda-Conversion" to see that
| it uses a syntactic, not semantic, formulation, and thus sets do not
| exist on this metalevel. That's a great point to the lamba calculus,
| that it does not depend on set theory.

A proof system for lambda calculus depends on set theory no more and no
less than any other formal proof system, e.g. one for first order logic.

| Thus, if one should not exclude the typical practises of metamathematics,
| one cannot "reduce everything to sets", as at the meta stage, axiomatic
| set theory does not exist on the logical level one is reasoning with.

Insofar as I can make any sense of this statement, it seems to indicate
a confusion between object and meta levels.

John.


David desJardins

unread,
Sep 3, 2002, 3:57:27 PM9/3/02
to
Hans Aberg writes:
> Therefore, if one in the future can make a computer proof-checking
> program that can do the proof checking humans do now, that may ease
> that publishing bottleneck.

It's not going to happen. It might be true in your lifetime that there
will be intelligent computers that you can ask to confirm the
correctness of your mathematical results, but they will still be just
like human mathematicians: they will reason imperfectly and make
mistakes. Indeed, eventually those computers may surpass humans at
"doing mathematics". But they still will be fallible and make mistakes.

Automatic, formal verification of real mathematics isn't a reasonable
thing to expect.

David desJardins


Hans Aberg

unread,
Sep 3, 2002, 6:21:45 AM9/3/02
to
In article <al0uc9$gk7$1...@glue.ucr.edu>,
ba...@math.removethis.ucr.andthis.edu (John Baez) wrote:

>In article <akdppr$5ke$1...@galois.mit.edu>, Tim Chow wrote:
>
>>In article <remove.haberg-2...@du128-226.ppp.su-anst.tninet.se>,
>>Hans Aberg <remove...@matematik.su.se> wrote:
>
>>>Tim Chow wrote:
>
>>>>For example, having
>>>>to quantify over proper classes isn't a problem. One simply defines a
>>>>CLASS to be a set having certain properties and a SET to be a set having
>>>>certain properties.

>Right. Sometimes people use the words SET and SMALL SET when they are
>playing this game. But sometimes one wants a hierarchy with more
>than two sizes of "SET", e.g. when considering the 2-category of all
>categories. For this it is best to assume Grothendieck's axiom of
>universes, which very roughly speaking establishes an infinite
>hierarchy of larger and larger sets with the property that "from inside"
>each one looks a lot like the class of all sets.

>>Note carefully the distinction between "SET"
>>and "set"; lowercase here denotes the axiomatic set theory that I am
>>reducing everything else to, and "SET" is a defined concept like "GROUP"
>>or "VARIETY" or whatever. Then one can quantify over CLASSes using
>>first-order set theory (*not* SET theory of course).

...

>>>I am not sure what you are saying here: Are you proposing inventing a new
>>>theory, redefining commonly used mathematical terms, where predicates are
>>>called "sets" so that one by that can claim all is sets ("proof by
>>>definition")?

>The point here is not to "prove" anything, but to establish a
>rigorous framework in which you can stop worrying about Russell's
>paradox and get on with the business of mathematics.

>>>I think most mathematicians would find that very confusing.

>>I agree fully.

>I don't think it's so confusing. If it helps to use the terms
>"small set" and "large set", that's fine, but the ultimate goal
>is simply to stop worrying about Russell's paradox without feeling
>guilty about it, and once you see how it's done, one rarely needs
>to worry about this stuff.

I thought that axiomatic set theory was developed within a
metamathematical context, using a simpler metalogic that one is sure is
consistent, thereby proving that the axiomatic set theory is also
consistent. Mendelson, "Introduction to Mathematical Logic", p. 56,
writes:

There is still a reason for formal axiomatic approach. Concepts and
propositions which involve the notion of interpretation, and related ideas
such as truth, model, etc., are often called _semantical_ to distinguish
them from _syntactical_ concepts, which refer to simple relations among
symbols and expressions of precise formal languages. Since semantical
notions are set-theoretic in character, and since set theory, because of
the paradoxes, is considered a rather shaky foundation for the study of
mathematical logic, many logicians consider a syntactical approach,
consisting in a study of formal axiomatic theories using only rather weak
number-theoretic methods, to be much safer.
<end of quote>

This way the Russell paradox is resolved by introducing into the
axiomatic set theory under metalogical study the two concepts "sets" and
"classes" (which some call "small sets" and "large sets"), which are
metalogical predicates, not sets within the axiomatic set theory under
study. Variables in this formal theory range over well-formed formulas,
not sets in the axiomatic theory under study.

The other poster originally claimed that this metamathematical approach
was unnecessary, and that one should instead develop axiomatic set
theory within itself.

Now you fully agree with that approach.

How do you then know that your proofs are logically correct, if you rely
on a set of axioms that you are not already sure are consistent?

Hans Aberg

unread,
Sep 4, 2002, 7:38:56 AM9/4/02
to
In article <voh8z2i...@math.berkeley.edu>, David desJardins
<da...@desjardins.org> wrote:

>> Therefore, if one in the future can make a computer proof-checking
>> program that can do the proof checking humans do now, that may ease
>> that publishing bottleneck.

>It's not going to happen. It might be true in your lifetime that there
>will be intelligent computers that you can ask to confirm the
>correctness of your mathematical results, but they will still be just
>like human mathematicians: they will reason imperfectly and make
>mistakes. Indeed, eventually those computers may surpass humans at
>"doing mathematics". But they still will be fallible and make mistakes.

This is a very strange statement: :-)

The process of proof verification is a purely formal logical process, and
does not require any intelligence.

The intelligence is required in finding the proof, but that is an
altogether different story.

There are several practical problems in getting the stuff into a
computer: For example, proof by handwaving is never going to be
acceptable to a computer; but then again such practices, though common,
are error-prone and not very acceptable to humans either.

There is a practical problem of putting the right axiomatic systems into
the computer, and yet another in making the system practical for humans.

But as for the latter, once one gets sufficiently advanced systems up
and working, it should be possible for humans to enter theorems and
proofs interactively, while the computer verifies them. Thus, even
though the step of putting handmade math into the computer is big, it is
not as big as one might think, because of this possible human-computer
interactivity.

>Automatic, formal verification of real mathematics isn't a reasonable
>thing to expect.

I think you mix up two concepts here: Automatic theorem proving, where
one hopes to plug in statements for which the computer then supplies the
proof, and proof verification, where one enters a formal proof which the
computer verifies.

Automatic theorem proving, in the form of the idea that one merely
enters a statement and the computer always determines whether it is
provable or not, is not formally possible if one assumes the axioms of
the natural numbers, in view of Godel's incompleteness theorem. But from
the practical point of view, one will even with much simpler axioms
quickly end up in non-terminations (infinite loops) and prohibitively
long computation times. No algorithm can decide whether code in a
Turing-equivalent language terminates or not.

But in the cases where it works, the Prolog system does verify a
statement by finding a proof, exploring all the possibilities it is told
to investigate. Also, when all possibilities have been searched and no
proof has been found, the statement is not provable by those
possibilities. But if the search space is too large, the program may not
terminate in time, or at all (non-termination).
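
To make the termination point concrete, here is a minimal sketch in
Prolog (all predicate names are my own invention, not taken from any
existing prover): a backward-chaining search with an explicit depth
bound, which always terminates, at the price of possibly missing proofs
that are too deep.

    % Toy knowledge base: axiom/1 holds atomic facts,
    % rule/2 holds implications as (Conclusion, Premise).
    axiom(p).
    rule(q, p).        % p => q
    rule(r, q).        % q => r

    % prove(Goal, Depth): Goal is derivable in at most Depth rule
    % steps.  The bound guarantees termination even on circular
    % rule sets, as discussed above.
    prove(Goal, _) :-
        axiom(Goal).
    prove(Goal, D) :-
        D > 0,
        D1 is D - 1,
        rule(Goal, Premise),
        prove(Premise, D1).

    % ?- prove(r, 2).   succeeds (two rule steps, then an axiom)
    % ?- prove(r, 1).   fails: the depth bound is too small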

Also, the relation between an automated theorem prover and a proof
verification system is this: A proof consists of a sequence of
statements, each proved from those before it, the axioms, and the
premises of the theorem one is proving, often with indications of how
one should proceed. From the automated-proving approach, one then has a
sequence of statements to be automatically proved, with some
restrictions on which axiom databases to use, which substitutions to
use, etc.

This means that the more work is done on developing the automated-proving
part, the more freely the human can enter the formal proofs: Instead of
saying exactly which axiom is to be used, one could say this or that
group of axioms, etc.

A fully automated proof is then a statement which admits an empty formal
proof. The automated proving system can further be designed so that it
writes out the searches it performs when finding the proof.
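
As a sketch of the verification side of this (again Prolog, with made-up
predicate names, and modus ponens as the only inference rule): a "proof"
is just a list of statements, each of which must be an axiom or must
follow from earlier lines.

    % check(Proof): every statement in the list must be an axiom
    % or follow by modus ponens from two earlier statements.
    check(Proof) :-
        check(Proof, []).

    check([], _).
    check([S|Rest], Earlier) :-
        justified(S, Earlier),
        check(Rest, [S|Earlier]).

    justified(S, _) :-
        axiom(S).
    justified(B, Earlier) :-                 % A, A => B |- B
        member(implies(A, B), Earlier),
        member(A, Earlier).

    % With axiom(a) and axiom(implies(a, b)) in the database:
    % ?- check([a, implies(a, b), b]).   succeeds
    % ?- check([b]).                     fails: b is unjustified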

I can illustrate this problem with, say, modus ponens:
(1) A, A => B |- B.
Then, if the computer knows that this axiom should be applied at a
particular state in order to prove B, it is fairly easy to see if there
is a match for the pattern "A => B", and this way one also finds A.

But if the search is first for "A" on the left-hand side, one will have
to search through a much larger set of possibilities, as anything will
match the variable "A". Thus, here an instruction that one should first
search for the "A => B" pattern might be appropriate.

Even worse, if one merely throws axiom (1) into a bag of axioms, then B
always matches in the search patterns. Thus, if the system does not have
some sort of redundancy cutoff, one will end up in non-termination. Some
proof systems do have such cutoffs, but the problem is not simple,
because loops can occur after deeper searches as well.

But it would probably not be a big problem to require a mathematician to
say that (1) should be applied with the second LHS argument pattern
searched first, especially if the system is interactive, because if one
has found the proof, one should know. The problem might be practical,
though, if one must always indicate all the uses of modus ponens: Then
one might figure out a way to simplify or automate that. But if the
system is too automated, it might be a problem that it always tries to
apply (1), even when it is obvious to a human that it should not.
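
In Prolog-like notation the two search orders can be written down
directly (a sketch only; known/1 is an invented database predicate):

    % Goal-directed: matching the "A => B" pattern first binds A
    % at once, so only premises of relevant implications are tried.
    prove_mp(B) :-
        known(B).
    prove_mp(B) :-
        known(implies(A, B)),
        prove_mp(A).

    % Blind: any known statement matches the variable A, so the
    % whole database is enumerated at every step.
    prove_mp_blind(B) :-
        known(A),
        known(implies(A, B)).

    % Note: without a redundancy cutoff, circular implications such
    % as known(implies(p, q)) together with known(implies(q, p))
    % make prove_mp/1 loop -- exactly the non-termination described
    % above.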

So the proving system will always hang in the balance: The better it is
at finding proofs by itself, the more convenient it will be for the one
who enters statements and proofs. But one cannot expect the system to
always find the proofs by itself: That is both impossible (in view of
the incompleteness and the undecidability of non-termination) and
unrealistic, as computers have little idea of the structure that humans
want to describe.

David desJardins

unread,
Sep 4, 2002, 3:43:01 PM9/4/02
to
Hans Aberg writes:
> The process of proof verification is a purely formal logical process,
> and does not require any intelligence.

However it requires astronomical computing capability. Translating any
real, interesting mathematics into ZFC, so that you can verify all the
statements step by step, isn't practical or reasonable. That's why
people don't write or check proofs that way now. For the same reason,
computers won't write or check proofs that way in the future.

> I think you mix up two concepts here: Automatic theorem proving, where
> one hopes to plug in statements for which the computer then supplies
> the proof, and proof verification, where one enters a formal proof
> which the computer verifies.

The former is easier and more realistic than the latter. I won't be
surprised to see computers finding mathematical proofs that humans can't
find, in my lifetime. But I think there's less chance of them
generating such proofs that are explicitly checkable, or explicitly
checking them, in a way that admits no possibility of error. (This is
entirely consistent with the fact that we now have organic computers,
human brains, which can do the former but not the latter.)

> But in the cases where it works, the Prolog system does verify a
> statement by finding a proof, exploring all the possibilities it is
> told to investigate.

If you think that you can, or will ever be able to, prove Fermat's Last
Theorem by entering it into a Prolog system which explores all of the
possible proofs in Peano Arithmetic, until it finds a valid one, then I
think you're crazy.

David desJardins


tc...@lsa.umich.edu

unread,
Sep 4, 2002, 5:28:07 PM9/4/02
to
In article <remove.haberg-0...@du134-226.ppp.su-anst.tninet.se>,

Hans Aberg <remove...@matematik.su.se> wrote:
>I thought that axiomatic set theory was developed within a
>metamathematical context, using a simpler metalogic that one is sure is
>consistent, thereby proving that the axiomatic set theory is also
>consistent.

No. This was Hilbert's program, which we know is doomed to failure. As
your quote from Mendelson illustrates, some people feel that "finitistic"
methods of proof are reliable whereas set theory is suspect. If you are
such a person, then the good news is that much of mathematical logic can
be developed using finitistic means, so you can be confident that when
you use words like "axiom," "proof," "consistent," "ZFC," and so on, you
are not talking nonsense. But if you are hoping that you can *prove the
consistency* of an axiomatic set theory like ZFC using finitistic means,
then you are out of luck. It's just not possible, as Goedel showed.

The advantage of axiomatic set theory is not that it is provably
consistent (it's not, unless you allow the use of arguments that are even
more suspect than standard set-theoretical arguments are) but that it
avoids all known paradoxes and makes clear (even to a finitist) exactly
what kinds of logical inferences about sets are allowed and what kinds
are not allowed.

>How do you then know that your proofs are logically correct, if you rely
>on a set of axioms that you are not already sure are consistent?

If you have persistent doubts about the consistency of ZFC, there is
not much anybody can do to alleviate those doubts; there is no way to
"bootstrap" up from a weaker system that you're more confident about.
But the same can be said about your "simpler metalogic"; a skeptic
might not be sure that your metalogic is consistent, and there's not
much you can do to win him over. Conversely, long experience with ZFC
has convinced many people that it is consistent.

The upshot is that at some point you have to take something on faith.
In the old days there may have been good reason to be unwilling to take
set theory on faith, but nowadays there is less reason for such skepticism.

Dan Asimov

unread,
Sep 4, 2002, 10:21:28 PM9/4/02
to
As I see it, there is no real way around the paradox of the set of all
sets. The standard ruse -- to define things called classes that
aren't sets -- has always struck me as about as effective as when the
police chase the drug dealers and prostitutes out of the public parks:
the problem does not disappear, it just goes elsewhere.

Whatever word or definition you dream up for the concept of a
"collection", there will never be such a thing as the collection of
all collections without introducing a paradox.

Since in the mathematics that I do, I never need to worry about such
large objects, I and many others can "go about the business of doing
mathematics" as one poster wrote.

But *not* because the set of all sets (or Russell's) paradox has truly
been made to disappear.

--Dan Asimov


Hans Aberg

unread,
Sep 5, 2002, 7:40:42 AM9/5/02
to
In article <al5tt7$8uo$1...@galois.mit.edu>, tc...@lsa.umich.edu wrote:

>>I thought that axiomatic set theory was developed within a
>>metamathematical context, using a simpler metalogic that one is sure is
>>consistent, thereby proving that the axiomatic set theory is also
>>consistent.
>
>No. This was Hilbert's program, which we know is doomed to failure. As
>your quote from Mendelson illustrates, some people feel that "finitistic"
>methods of proof are reliable whereas set theory is suspect. If you are
>such a person, then the good news is that much of mathematical logic can
>be developed using finitistic means, so you can be confident that when
>you use words like "axiom," "proof," "consistent," "ZFC," and so on, you
>are not talking nonsense.

I did not mention "finitistic" methods, but a metalogic within which
axiomatic set theory does not exist.

> But if you are hoping that you can *prove the
>consistency* of an axiomatic set theory like ZFC using finitistic means,
>then you are out of luck. It's just not possible, as Goedel showed.

Mendelson, and the other books on metamathematics I have, do use an
infinite number of symbols. But their metalogic does not rely on
axiomatic set theory; it is a syntactic metalogic. They prove Goedel's
theorem. Also, Church's "The Calculi of Lambda-Conversion" uses a
syntactic approach that does not depend on set theory; the natural
numbers are constructed via Church numerals.
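
As an aside, the syntactic flavour of this construction is easy to
imitate even in Prolog (my own sketch, not Church's notation): a numeral
is a plain term, and "application" is a small interpreter; no sets
appear anywhere.

    % Church-style numerals as terms: zero, s(zero), s(s(zero)), ...
    % app(N, F, X, Y): Y is the result of applying F to X, N times.
    app(zero, _F, X, X).
    app(s(N), F, X, Y) :-
        app(N, F, X, Z),
        call(F, Z, Y).

    inc(X, Y) :- Y is X + 1.

    % ?- app(s(s(s(zero))), inc, 0, R).   gives R = 3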

>If you have persistent doubts about the consistency of ZFC, there is
>not much anybody can do to alleviate those doubts; there is no way to
>"bootstrap" up from a weaker system that you're more confident about.

My guess is that you want to get into the discussion about absolute vs.
relative consistency, forcing, etc., here.

I have a vague memory that absolute consistency (that not every
statement is a theorem) is not or cannot be proven, only consistency
relative to certain axioms.

>But the same can be said about your "simpler metalogic"; a skeptic
>might not be sure that your metalogic is consistent, and there's not
>much you can do to win him over. Conversely, long experience with ZFC
>has convinced many people that it is consistent.

This is true: The meta-meta-logic should be the same as the meta-logic.
(But I have not formally derived it).

But it is still the case that the descriptions of axiomatic set theory
that I have use a syntactic metalogic which does not rely on axiomatic
set theory. The metalogical system is also quite simple to implement on
a computer. Strictly speaking, the implementation translates the
syntactic metalogical constructs into an inner semantic model which
merely expresses the intent of the original syntactic model.

So if one is interested, one might work backwards and get a more exact
semantic metalogic. :-)

>The upshot is that at some point you have to take something on faith.
>In the old days there may have been good reason to be unwilling to take
>set theory on faith, but nowadays there is less reason for such skepticism.

I think rather that the bootstrap is that logic originates in a
description of nature.

As a curiosity on the faith part, I think that some Dadaists invented
the "theory of contradiction", which, just as any logic, uses its own
logic as metalogic. :-)

Vidhyanath Rao

unread,
Sep 5, 2002, 11:16:27 AM9/5/02
to

"John Baez" <ba...@math.removethis.ucr.andthis.edu> wrote in message
news:al0uc9$gk7$1...@glue.ucr.edu...

> Right. Sometimes people use the words SET and SMALL SET when they are
> playing this game. But sometimes one wants a hierarchy with more
> than two sizes of "SET", e.g. when considering the 2-category of all
> categories. For this it is best to assume Grothendieck's axiom of
> universes, which very roughly speaking establishes an infinite
> hierarchy of larger and larger sets with the property that "from
> inside" each one looks a lot like the class of all sets.

Apart from the objections I mentioned in the thread Automatic Theorem
Prover [I should have mentioned that A. Levy, "Role of classes in set
theory", and Benabou, "Fibered categories as foundations", J. Sym.
Logic, raised similar objections], there is another one which I desisted
from mentioning, but not anymore:

People claim, sometimes vehemently, that published proofs that depend on
Grothendieck universes in the published version don't really need them,
and that ZFC is sufficient: see
http://www.math.psu.edu/simpson/fom/
(some threads from 1998/9) for a particularly nasty example. Since there
is no metatheorem of this sort, and categorical arguments often invoke
products whose size (rank, if we are serious about sticking to ZFC) is
not a priori bounded, such claims are unjustified and violate the
standards of rigor that we mathematicians wave about in public, and the
view of the relationship of ZFC to foundations that is offered to
students.

> The point here is not to "prove" anything, but to establish a
> rigorous framework in which you can stop worrying about Russell's
> paradox and get on with the business of mathematics.

But can you stop worrying about Russell's paradox without paying
constant attention (not always done) to the small/large distinction?
Note that there is no "2-category of all categories", but only a
"2-category of all U-small categories", for a >given< universe U. We are
back to Levy's objection.

zirkus

unread,
Sep 7, 2002, 5:31:11 PM9/7/02
to
"Vidhyanath Rao" <nath...@osu.edu> wrote in message news:

> theory" and Benabou "Fibered categories as foundations" J. Sym Logic
> raised similar objections.],

Hi, what are "fibered categories" and would they happen to include a
general category C (with a faithful functor m: C -> Mf) over fibered
manifolds? Thanks.


Hans Aberg

unread,
Sep 4, 2002, 7:38:48 AM9/4/02
to
In article <c616467c.0209...@posting.google.com>,
jo...@ichips.intel.com (John Harrison) wrote:

>| I retorted that a computer system that would be practicable must be able
>| to support higher order formulations, if that is what mathematicians use,
>| even if the computer then internally resolves them via some first order
>| theories. For example, set theoretic functions appearing as arguments in
>| functionals can be translated into a first order axiomatic set theory. But
>| it would not be practically to require the mathematicians to do that
>| reformulation by hand before the stuff can be plugged into the computer.

>What "reformulation" do you have in mind? First order set theory permits
>one to reason in a fairly direct way with functions and other sets as
>basic objects.

I gave examples:

For example, F(f), where F is a functional and f is a function, is a
second order formulation. The corresponding first order axiomatic set
theory formulation is "f in F".

If a system should be correct with respect to axiomatic set theory, it
should be able to do such reformulations internally in order to be
practical. Such an implementation should have (perhaps depending
somewhat on the particular axiomatic set theory one is relying on) two
unary predicates "set(x)" and "class(x)", and a binary predicate
"x in y". Then the higher order formulation "F(f)" is reduced to that
first order axiomatic set theory in order to become formally correct.

If one merely assumes that F and f are sets, but does not implement that
formally, that is not axiomatic set theory, but naive set theory. One
can still do much math with that approach, but one does not have the
assurance of formal logical correctness that one has in an axiomatic set
theory approach.
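
A minimal sketch of what such an internal reformulation could look like
(the predicate names set/1, class/1 and in/2 are as above; the rewrite
predicate itself is my own illustration):

    % Hypothetical first order vocabulary.
    set(f).
    class(big_f).
    in(f, big_f).

    % Rewrite a second order atom apply(F, X) -- i.e. "F(f)" --
    % into the first order membership atom in(X, F), checking the
    % sort predicates along the way.
    reduce(apply(F, X), in(X, F)) :-
        class(F),
        set(X).

    % ?- reduce(apply(big_f, f), G).   gives G = in(f, big_f)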

>| This applies to the logic. As for the metalogic used in order to conduct
>| proofs, I landed on a Prolog implementation that is second order in the
>| sense of Sterling & Shapiro, "The Art of Prolog", sec. 16.3. Thus, even if
>| the logic one is using is only first order, the total logic plus metalogic
>| system is of higher order.

>The usual aim of automated theorem proving is to actually do proofs in
>a formal system, not prove metatheorems about the formal system. Why do
>you keep bringing up metatheory?

The metatheory allows one to work with different logical systems, and also
shows up in the way mathematicians choose proof strategies. For example,
direct proofs and proofs by contradiction correspond to different
metatheorems.

From the point of view of implementation, the simple Prolog machinery
corresponds well to the simple metalogic one is using, but not well to
the complicated axiomatic set theory (see footnotes in Mendelson, p. 56).

This is more a "proof verification" approach than an "automated theorem
proving" approach, as the latter is not realistic with respect to
typical working math (meaning that if one tries to automate the proofs
of all working math so as to make sure it is verifiable, one will
quickly end up with big chunks for which the approach does not work).

Also, the axiomatic set theory works in math, borrowing from computer
lingo, as the "interface" to the axioms one is studying: If, say, the
real numbers R are implemented via axiomatic set theory, then if pi in R
is the usual circle circumference/diameter ratio, it is possible to say
"all x in pi", and that would have a specific axiomatic set theoretic
meaning with respect to the particular construction of R one relies on.

However, if the axiomatic set theory is considered as the implementation
of the axioms of R, which serve as an interface, then the system should
be able to notice that "all x in pi" is left unspecified, and should
properly issue a diagnostic.
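
For instance (a sketch with invented predicate names), the system could
tag interface-level constants and refuse membership queries that peek
below the interface:

    % pi is a constant of the interface R; its set theoretic
    % encoding is deliberately hidden behind the interface.
    interface_constant(pi, r).

    query(in(X, C)) :-
        interface_constant(C, _), !,
        format('diagnostic: in(~w,~w) depends on the encoding of ~w~n',
               [X, C, C]),
        fail.

    % ?- query(in(x, pi)).
    % prints the diagnostic and fails instead of answering.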

> The most popular framework for meta-reasoning is probably
>Primitive Recursive Arithmetic (PRA); cf. Girard's "Proof Theory" book,
>passim. This is far from being higher-order --- indeed one common
>formulation is quantifier-free. So even if one supposes that metatheory
>is of any relevance to this discussion, the usual practice of logicians
>working in the field hardly supports a preference for higher-order
>logic.

Well, I have always said that one should use the simpler metalogic as a
basis for the Prolog implementation, as then one can avoid using
quantifiers and such on that level.

As for the order of that Prolog-level metalogic, I can only refer to the
discussion in Sterling & Shapiro, "The Art of Prolog", ch. 16; evidently
standard Prolog has some higher order hardwired primitives. As for the
second order formulations of loc. cit., sec. 16.3, I can only note that
I implemented such in the code I am experimenting with, simply because
it turned out to be practical.

I can note that the metalogical implementation I am doing right now is
strictly first order in the sense of metalogic, in view of the fact that
predicates and functions do not appear as arguments. However, I do admit
the variable matching which is not admitted in first order Prolog
systems.

Also note the difference between the order of the formulation and the
order of the underlying axiomatic theory, as discussed above.

If you want to do something else in your system, please go ahead. But
also note that mathematicians always tend to try to break the limits, so
if one builds in assumptions that "most math is done in this or that
way", one may accidentally lop off a rather large chunk of math, because
a lot of it relies in part on stuff that explores the outer limits, and
beyond, of axiomatic set theory.

>| Also look into Church's "The Calculi of Lambda-Conversion" to see that
>| it uses a syntactic, not semantic, formulation, and thus sets do not
>| exist on this metalevel. That's a great point to the lamba calculus,
>| that it does not depend on set theory.

>A proof system for lambda calculus depends on set theory no more and no
>less than any other formal proof system, e.g. one for first order logic.

So what do you mean here by "no more and no less"? -- One would think that
either lambda calculus depends on axiomatic set theory or it does not (the
latter is the case).

>| Thus, if one should not exclude the typical practises of metamathematics,
>| one cannot "reduce everything to sets", as at the meta stage, axiomatic
>| set theory does not exist on the logical level one is reasoning with.
>
>Insofar as I can make any sense of this statement, it seems to indicate
>a confusion between object and meta levels.

Probably in the mind of the reader. :-)

You have snipped the material out of its context: I think that there was
a suggestion to use axiomatic set theory as the primitive logic, so that
all reasoning must be done within it.

But on the metalevel in metalogic, axiomatic set theory is an object
under study, not a logic one reasons with. So that approach would
exclude typical metalogical practices. This is discussed in the
footnotes of Mendelson, "Intro to Math Logic", p. 56.

George Russell

unread,
Sep 5, 2002, 6:45:23 AM9/5/02
to
David desJardins wrote:
>
> Hans Aberg writes:
> > The process of proof verification is a purely formal logical process,
> > and does not require any intelligence.
>
> However it requires astronomical computing capability. Translating any
> real, interesting mathematics into ZFC, so that you can verify all the
> statements step by step, isn't practical or reasonable. That's why
> people don't write or check proofs that way now. For the same reason,
> computers won't write or check proofs that way in the future.
[snip]
However, translating the *statement* of interesting theorems into ZFC, or
some higher-order theorem-proving language such as Isabelle, is not so hard.
Furthermore, even though it isn't practical to translate Wiles' proof of
FLT into such a language, the translation would not (I think) be so long that
a modern computer couldn't verify it pretty quickly.

I don't think it particularly implausible that before I retire in 2034, there will
be quantum computers, which will be able to execute non-deterministic algorithms in
about the same time as modern computers can execute deterministic ones. Then you
would not have to supply a verifier with a proof; you would simply have to pose it
the question "Is there a proof for statement S with length <= N?", and assuming that
verifying a proof of length N is feasible for current computers, such a quantum
computer would be able to answer "Yes" or "No". With more work it should be able to
output a proof (if the answer was yes), though I don't know enough about quantum
computers to know whether that could be done except by repeatedly asking questions
of the form "Is there a proof for statement S with length <= N beginning with bits
010101...?". Also, I suspect such computer-generated proofs would be extremely hard
to understand.
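
The bit-by-bit questioning in the last paragraph is ordinary witness
extraction from a decision oracle; classically it looks like the toy Prolog
sketch below (valid_proof/1 is a stand-in for "is a correct proof of S", and
oracle/1 for the hypothetical yes/no question):

    % Stand-in database of "valid proofs", each a list of bits.
    valid_proof([0, 1, 1]).
    valid_proof([1, 0, 0, 1]).

    % oracle(Prefix): does some valid proof extend Prefix?
    oracle(Prefix) :-
        valid_proof(P),
        append(Prefix, _, P).

    % extract(Prefix, Proof): grow the prefix one bit at a time,
    % asking only yes/no questions of the oracle.
    extract(Proof, Proof) :-
        valid_proof(Proof).
    extract(Prefix, Proof) :-
        member(B, [0, 1]),
        append(Prefix, [B], Longer),
        oracle(Longer),
        extract(Longer, Proof).

    % ?- extract([], P).   gives P = [0, 1, 1], then P = [1, 0, 0, 1]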

Of course this is pure fantasy. If the optimists had been right, we would all by now
be using optical or superconducting computers. Still, it is certainly possible.


gowan

unread,
Sep 7, 2002, 12:22:38 PM9/7/02
to
"Vidhyanath Rao" <nath...@osu.edu> wrote in message news:<al874d$saj$1...@charm.magnus.acs.ohio-state.edu>...

> "John Baez" <ba...@math.removethis.ucr.andthis.edu> wrote in message
> news:al0uc9$gk7$1...@glue.ucr.edu...

> > The point here is not to "prove" anything, but to establish a
> > rigorous framework in which you can stop worrying about Russell's
> > paradox and get on with the business of mathematics.
>
> But can you stop worrying about Russell's paradox without paying
> constant attention (not always done) to the small/large distinction?
> Note that there is no "2-category of all categories", but only a
> 2-category of all U-small categories", for a >given< universe U. We are
> back to Levy's objection.

Russell's paradox was originally proposed in the context of
discussions of foundations, Frege's work in particular. It didn't
come up in "ordinary" mathematics. Apparently the possible
difficulties with Grothendieck's stuff haven't led to immediate
contradictions either, i.e. "ordinary" theorems proved using those
tools haven't been contradicted yet. (Is that so? I'm no algebraic
geometer.) Russell's paradox is one of those self-reference things.
The type hierarchy removes the self-reference. Are there such
"self-referential" usages in Grothendieck's work?


Agustí Roig

unread,
Sep 9, 2002, 1:27:19 PM9/9/02
to
zir...@hotmail.com (zirkus) wrote in message news:<8c7d34cb.02090...@posting.google.com>...


You can find a definition in:


SGA 1, Revêtements Etales et Groupe Fondamental
Exposé VI, Catégories fibrées et descente
Springer LNM 224


Essentially, a fibered category is a "projection" functor,

P: A ---> E .

You may think of A as being the union of the "fibers" A_x , for every
object x of E: the subcategories of objects a of A such that Pa = x
and morphisms f such that Pf = 1_x .

You must also have a "reciprocal image functor"

f^*: A_y ---> A_x ,

for every morphism f in E, together with a canonical isomorphism of
functors

f^* g^* ---> (gf)^*

for every pair of composable morphisms f , g in E .


An example. Let R be a ring and M(R) the category of R-modules. If
f: R ---> S is a ring morphism, you have a "reciprocal image functor"

f^* : M(S) ---> M(R) ,

the restriction of coefficients. Let E be the category of all rings.

Now, take A in the previous definition to be the "union" of all
these M(R) , for every ring R . You can write objects of A as
couples (R,M), where R is a ring and M is an R-module. Morphisms
of A are also couples

(f,\phi): (R,M) ---> (S,N)

where f: R ---> S is a ring morphism and \phi: M ---> N is an
f-equivariant morphism (i.e., \phi(ax) = f(a) \phi(x), for every a in
R and every x in M ). You have a projection functor, P: A ---> E,
P(R,M) = R , and A_R = M(R) .

Agustí Roig

Mark

unread,
Sep 9, 2002, 1:27:19 PM9/9/02
to
Tim Chow wrote:
>For example, having
>to quantify over proper classes isn't a problem. One simply defines a
>CLASS to be a set having certain properties and a SET to be a set having
>certain properties.

ba...@math.removethis.ucr.andthis.edu (John Baez) writes:
>Right. Sometimes people use the words SET and SMALL SET when they are
>playing this game. But sometimes one wants a hierarchy with more
>than two sizes of "SET", e.g. when considering the 2-category of all
>categories.

As per the subject header, there are also set theories (not as well
known) where there is indeed a set of all sets. Church and Quine
spent a large amount of time with these kinds of theories. The
one that's discussed the most is Quine's New Foundations (NF).

Equiconsistency with ZF is not known, as far as I'm aware; but it is
known with respect to a variation of Russell's Simple Type Theory
(TST).

Daryl McCullough

unread,
Sep 9, 2002, 4:27:20 PM9/9/02
to
whop...@alpha2.csd.uwm.edu (Mark Hopkins) says...

>As per the subject header, there are also set theories (not as well
>known) where there is indeed a set of all sets. Church and Quine
>spent a large amount of time with these kinds of theories. The
>one that's discussed the most is Quine's New Foundations (NF).
>
>Equiconsistency is not known, as far as I'm aware, with ZF; but
>is known with respect to a variation of Russell's Simple Type Theory
>(TST).

If you take New Foundations and drop the assumption that everything
is a set (that is, if you allow ur-elements) then the resulting theory, NFU, is
provably consistent relative to ZF
(http://math.boisestate.edu/~holmes/holmes/nf.html).

It's a kind of strange result. For ZF, allowing ur-elements has no effect on the
consistency strength, since you can define the class of "pure sets" by
transfinite recursion. However, I guess you can't do a similar thing in NF
because its sets are not well-founded.

Another interesting fact about the strength of variations of NF (quoting from
the web page above): "Robert Solovay has shown that the extension of NFU +
Infinity + Choice with the axiom 'all Cantorian sets are strongly Cantorian'
(this axiom was proposed for NF by C. Ward Henson) has the exact strength of ZFC
+ 'there is an n-Mahlo cardinal' for each concrete natural number n"

--
Daryl McCullough
Ithaca, NY

Alex

unread,
Sep 10, 2002, 10:27:23 AM9/10/02
to
gow...@hotmail.com (gowan) wrote in message news:<fab2a28f.02090...@posting.google.com>...


I suppose there is a fact about universes -- the Grothendieck theory of
universes -- to note. Namely, if there is some way to get a
contradiction after adding Grothendieck universes, then there was
already a contradiction before adding universes.
By the way, there is no way to consider the category of all sets.
First you choose a set X and two universes U and V such that

X \in U \subseteq V

Then what you have is the following: a theory involving U-data will
give V-results. That's what is always assumed in the first lines of
Grothendieck's works. As an example, consider the process of taking
limits -- something happening in everyday life, like sheafification --
you will have U-sets (or objects) and you get a V-set (or object).
You cannot do it in one universe. Special cases must be proved
directly. Let me give you another example: Consider the process of
category localization, as in Verdier's generalization of the
construction of derived categories from an additive category. If you
don't have any boundedness condition on complexes (bounded above,
below or on both sides), then you have the following theorem:

T h e o r e m (Verdier's Localization Theorem): If A is any abelian
U-category, then there is a derived V-category D(A), with the usual
universal property among other triangulated categories and
triangulated functors K(A) --> T, where K(A) is the U-category of
complexes with homotopic morphisms identified.

You may see it stated somewhere like this: The "class" of morphisms
between two typical objects in a general derived category is not
necessarily a set.

But for the usual cases, say A = abelian groups, one can use injective
resolutions (Spaltenstein's version) to show that you actually have a
category in the sense of Mac Lane; the hom sets are small.

I hope this helps. Please tell me if I have the wrong impression.
Alex.

Agustí Roig

unread,
Sep 10, 2002, 5:27:18 PM9/10/02
to
Maybe it would help to add two things.



> You must also have a "reciprocal image functor"
>
> f^*: A_y ---> A_x ,


1st. These "reciprocal image functors" must satisfy a certain
universal property (see SGA1 or, excuse me for quoting my own work at
the same time as Grothendieck's, "Model category structures in
bifibered categories", JPAA, where, for instance, you can find how to
make calculations of limits and colimits in (bi)fibered categories).



> An example. Let R be a ring and M(R) the category of R-modules.

2nd. Another important example: take E to be the category of
topological spaces and A the category of sheaves over all
topological spaces.

Agustí Roig

John Baez

unread,
Sep 11, 2002, 9:57:19 AM9/11/02
to
In article <al874d$saj$1...@charm.magnus.acs.ohio-state.edu>,
Vidhyanath Rao <nath...@osu.edu> wrote:

>"John Baez" <ba...@math.removethis.ucr.andthis.edu> wrote in message
>news:al0uc9$gk7$1...@glue.ucr.edu...

>> Right. Sometimes people use the words SET and SMALL SET when they are
>> playing this game. But sometimes one wants a hierarchy with more
>> than two sizes of "SET", e.g. when considering the 2-category of all
>> categories. For this it is best to assume Grothendieck's axiom of
>> universes, which very roughly speaking establishes an infinite
>> hierarchy of larger and larger sets with the property that "from
>> inside" each one looks a lot like the class of all sets.

>Apart from the objections I mentioned [...]

I agree that not all problems can be easily circumvented by the
above trick. However, a lot of them can, and I've never personally
done any category theory where the axiom of universes isn't enough.

>there is another one which I desisted from
>mentioning, but not anymore:
>
>People claim, sometimes vehemently, that published proofs that depend on
>Grothendieck universes in the published version don't really need it,
>and that ZFC is sufficient: see
>http://www.math.psu.edu/simpson/fom/
>(some threads from 1998/9) for a particularly nasty example. Since there
>is no metatheorem of this sort, [...]

... people shouldn't make these claims! I agree.

>[...] such claims are unjustified and violate the
>standards of rigor that we mathematicians wave about in public and the
>view of the relationship of ZFC to foundations that are offered to
>students.

I don't personally make any claims that ZFC is the right foundation for
mathematics; my willingness to accept new axioms like the axiom of
universes whenever they make things convenient is a testament to this.
Anyone who goes around telling students that ZFC underlies all
mathematics is being a bit silly, though I suppose it's okay if it lures
the students into actually learning some set theory, at which point
they'll realize it ain't so simple.

>> The point here is not to "prove" anything, but to establish a
>> rigorous framework in which you can stop worrying about Russell's
>> paradox and get on with the business of mathematics.

>But can you stop worrying about Russell's paradox without paying
>constant attention (not always done) to the small/large distinction?

It depends what you mean by "paying constant attention".
I need to tie my shoes every morning, but I wouldn't say I
"pay constant attention" to this chore: most of the time,
I hardly notice that I'm doing it!

>Note that there is no "2-category of all categories", but only a
>2-category of all U-small categories", for a >given< universe U.

That's fine with me! But I don't think we need to constantly
highlight the role of the universe U. In many contexts it's
sufficient to put in a little footnote saying "by the 2-category
of all categories we really mean the 2-category of all U-small
categories", and get on with business.


Vidhyanath Rao

unread,
Sep 11, 2002, 5:57:20 PM9/11/02
to

"Alex" <pure_mat...@yahoo.com> wrote in message
news:fjnf9.19376$m7.1...@vixen.cso.uiuc.edu...

> I suppose there is a fact about Universes,
> -The Grothendieck theory of universes- to note. Namely, if there
> is someway
> to get a contradiction after adding Grothendieck universes, then there
> was already a contradiction before adding universes.

No. In fact, assuming just one Grothendieck universe is stronger than
Morse-Kelley, which is strictly stronger than ZFC. Assuming n+1
Grothendieck universes is strictly stronger than assuming n.

Vidhyanath Rao

unread,
Sep 11, 2002, 5:57:20 PM9/11/02
to

"zirkus" <zir...@hotmail.com> wrote in message
news:8c7d34cb.02090...@posting.google.com...

> Hi, what are "fibered categories" and would they happen to include a
> general category C (with a faithful functor m: C -> Mf) over fibered
> manifolds? Thanks.

Two other references, giving just categorical aspects: Chapter 8 of
Borceux "Handbook of categorical algebra", vol.2 and Jacobs,
"Categorical logic and type theory". I am not sure what you mean by the
second question.

zirkus

unread,
Sep 13, 2002, 9:57:19 AM9/13/02
to
"Vidhyanath Rao" <nath...@osu.edu> wrote in message news:

> Two other references, giving just categorical aspects: Chapter 8 of


> Borceux "Handbook of categorical algebra", vol.2 and Jacobs,
> "Categorical logic and type theory".

I thank you and Agusti for your replies. It turns out I do know what a
fibered category is, but I had never heard of Benabou, so I didn't know
if he meant a different concept. Here's some of what I know about
fibered categories:

We might say that a category F is a fibered category over A if it acts
like a sheaf of categories over A, where I'm thinking of A as a
category of schemes of finite type over SpecC. A fibered category F
over the category of schemes A is a functor p: F -> A.

I know that weak stacks are implemented via fibered categories (in the
sense of Grothendieck). In [1] by Anders Kock, a stack is a fibered
category p: X -> B with a descent property wrt some covering notion in
B, and he defines a certain fibered category of principal G-bundles as
a stack. He also has another paper on principal G-bundles and
groupoids [math/0005125]. (Btw, the isomorphism of categories fibered
in groupoids is a morphism which is an equivalence of categories).

I'm interested in these kinds of thoughts for this reason:

Roughly, a natural bundle is a functor from the category of
m-dimensional manifolds to the category of fibered manifolds, and a
gauge natural (GAN) bundle is a functor from the category of principal
G-bundles to the category of fibered manifolds and fiber-respecting
mappings. (For more precise definitions and explanations of these
concepts see Section 3 on page 6 of [math.DG/0201235] and pages
138-139 of Section 14, 169-170 of Section 18 and 394-396 of Section 51
of this free book via the internet [2]).

Natural bundles are "bundle functors" - see the pages I just mentioned
in [2] and Slovak's paper "Bundle Functors on Fibred Manifolds" [3].
Sometimes the term "Weil functor" comes up; Weil functors arise from
Weil algebras and are themselves bundle functors (in fact, the
product-preserving ones). There is also a paper extending Weil functors
to infinite-dimensional manifolds via the theory of C^{infinity}
algebras [math.DG/9611222].
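
A concrete instance: the simplest Weil algebra is the dual numbers, and
its Weil functor is the ordinary tangent bundle (T^A is the usual
notation in that literature):

    A = \mathbb{R}[\varepsilon]/(\varepsilon^2), \qquad
    T^A M = \mathrm{Hom}_{\mathbb{R}\text{-}\mathrm{alg}}(C^\infty(M), A)
    \cong TM,

since an algebra map C^\infty(M) \to A has the form
f \mapsto f(x) + \varepsilon X_x(f), i.e. a point x together with a
tangent vector X_x at x.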

GAN bundles and operators allow a unifying framework for natural field
theories like GTR, gauge theories, as well as bosonic and fermionic
matter theories. The GAN framework implies, for example, that tetrads
should be replaced by "spin-tetrads", which are useful for some
applications in physics (see gr-qc/0006032 and gr-qc/0201079).

Perhaps it might be fun to try to "higher categorize" some of these
concepts, but I'm not sure that I could do it. For instance, the paper
on Weil functors makes use of the concept of surjection. I'm not sure,
but I think that Dan Christensen once mentioned that the category-
theoretic version of a surjection is an effective epimorphism (whatever
those are; see the sketch below). Anyway, hopefully someone might at
least learn something from these ramblings.
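
(The sketch: an effective epimorphism is a map which is the coequalizer
of its kernel pair,

    X \times_Y X \rightrightarrows X \xrightarrow{\;e\;} Y,

where X \times_Y X = \{(x, x') : e(x) = e(x')\}. In the category of sets
these are exactly the surjections, since Y \cong X/\!\sim for the
relation e(x) = e(x') precisely when e is onto. This is the standard
definition, not something specific to the papers above.)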


[1] http://www.ml.kra.se/preprints/archive/2000-2001/2000-2001-27.ps

[2] http://www.emis.de/monographs/KSM/

[3] http://citeseer.nj.nec.com/26790.html

Mitchell Smith

unread,
Sep 13, 2002, 10:50:16 PM9/13/02
to

Dan Asimov wrote:

> >Whatever word or definition you dream up for the concept of a
> >"collection", there will never be such a thing as the collection of
> >all collections without introducing a paradox.
>

The paradox arises because you fail to recognize that your underlying
intuition for a collection is a totally disconnected topology. By virtue
of the axiom schema of separation it is impossible for a language user to
prove that the class universe is not a connected topology for which every
proper subspace is totally disconnected.

The foundation of mathematics is about language reference--not objects.
Hereditarily defined classes are the unique object type which exhibit both
object semantics and collection semantics. Because of this, it is
possible to characterize the identity predicate with respect to
topological separation rather than extensionality.

The axiom of extension is the culprit behind much of the confusion in
foundations. It is derived from solutions for problems which do not
concern mathematics. It found its way into mathematics because much of
foundational thinking was decided by philosophers who were imposing a
monolithic logic.

If you look at the philosophy of Immanuel Kant, you will find a kinder,
gentler philosophy which exposes the mistakes of the last century. His
distinction between analytic cognitions and synthetic cognitions provides
a foundation for rejecting the premises of conventional model theory.

Now, if topological separation is used for characterizing distinctness
(and, thereby, the identity predicate), then the class universe is
distinct from its elements because its elements are members of a
collection in which the class universe is not (namely, itself). The
problem, however, is not to worry about a "set of all sets." Rather, how
does a language user establish reference to the class universe without
also establishing type hierarchies?

Instead of using the membership predicate, think about set containment.
Actually, think about both.

If you begin trying to formally define the membership predicate by drawing
Venn diagrams for intuition, you will quickly realize that you need
convergence under the nested set theorem of real analysis. That is, just
to draw a sequence of circles converging to a point in a Venn diagram, you
need to have a notion of a transitive relation.

Classical set theory never defines its predicates. The philosophers had
to consider a problem called infinite regress with regard to definitions.
The solution was to formulate a system based on undefined language
primitives.

In my humble opinion, that would have to completely fail even the most
naive use case analysis of mathematicians as language users. Wouldn't you
think? Indeed, mathematicians define their terms.

If you want to formally define the membership predicate, you will need to
introduce a transitive relation first. This can be easily formulated
relative to the conditional of zero-order logic. Quantification provides
for syntax that translates the transitivity of the conditional connective
between formulas to the transitivity of a predicate between variable
language terms.
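
In symbols (one way to render the intended translation; the symbol
\sqsubseteq stands for the transitive containment relation):

    \text{zero-order:}\quad ((p \to q) \land (q \to r)) \to (p \to r)

    \text{first-order:}\quad \forall x \forall y \forall z\,
    ((x \sqsubseteq y \land y \sqsubseteq z) \to x \sqsubseteq z).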

Now, Venn diagrams are composed of "circles" and "dots" in a manner of
speaking. Your transitive relation relates "circles" to "circles" in
order to characterize the nesting properties you need from your original
thought experiment.

In contrast, your membership predicate relates "dots" to "circles." Not
only does this characterize the "circles" with collection semantics, but,
it sets up a notion of identity based on topological separation of points
because "circles" can be used to separate "dots."

Do not worry about sets as objects at this point. Indeed, within the
Kantian framework you are mandated to visualize your synthetic cognition
before formalizing it. Focus on the Venn diagrams first. Then see if
everything else still makes sense.

If, instead of set theory, you look at an arbitrary first-order model you
will quickly realize that the "dot" representations correspond to the term
referents of the language. Arbitrary object types do not have collection
semantics, so they do not have "circle" representations. Thus, with
regard to your formal definition of the membership predicate for the
purpose of describing hereditarily defined sets, you are doing nothing
more than introducing language reference into a universe of "circles."

I have formalized all of this into first-order language. It works well.
It turns out that the satisfaction predicate is the lax containment
relation. This allows reference to the class universe as the greatest
poset without creating the conditions of Russell's paradox. Collection
semantics for the class universe are imposed by the explicit axiomatic
assertion of a property called almost universality.

So, in the end you get a connected topology whose proper subspaces are
totally disconnected in the sense of the axiom schema of separation. The
total disconnection accounts for the intuition of classes as collections.
But, their existence as topologies accounts for the intuition of classes
as objects.

Why topologies? Every group satisfies the group axioms. But, not every
class contains an infinite element. Not every class is closed under the
axiom of pairing. Not every class contains all of its subsets (in the
sense of closure under the axiom schema of separation). In short, the
axioms of set theory do not describe sets--they describe the set
universe. So, it becomes necessary to understand sets in terms of what
they have in common with their container class rather than the other way
around.

Simply put, the subspaces of topologies are topologies and
disconnectedness provides for collection semantics. Moreover, the
perspective has to be taken from the top down in order to avoid
hierarchies.

As for your suspicions concerning Russell's paradox, let me leave you with
a quote from Immanuel Kant:

To know what questions may reasonably be asked is already
a great and necessary proof of sagacity and insight. For if a
question is absurd in itself and calls for an answer where none
is required, it not only brings shame on the propounder of the
question, but may betray an incautious listener into absurd
answers, thus presenting the spectacle of one man milking a
he-goat and the other holding a sieve underneath.


Mitch Smith
mit...@enteract.com
