
This Week's Finds in Mathematical Physics (Week 80)


John Baez

Apr 21, 1996
This Week's Finds in Mathematical Physics - Week 80
John Baez

There are a number of interesting books I want to mention.

Huw Price's book on the arrow of time is finally out! It's good to see
a philosopher of science who not only understands what modern physicists
are up to, but can occasionally beat them at their own game.

Why is the future different from the past? This has been vexing people
for a long time, and the stakes went up considerably when Boltzmann
proved his "H-theorem", which seems at first to show that the entropy of
a gas always increases, despite the time-reversibility of the laws of
classical mechanics. However, to prove the H-theorem he needed an
assumption, the "assumption of molecular chaos". It says roughly that
the positions and velocities of the molecules in a gas are uncorrelated
before they collide. This seems so plausible that one can easily
overlook that it has a time-asymmetry built into it --- visible in the
word "before". In fact, we aren't getting something for nothing in the
H-theorem; we are making a time-asymmetric assumption in order to
conclude that entropy increases with time!

The "independence of incoming causes" is very intuitive: if we do an
experiment on an electron, we almost always assume our choice of how to
set the dials is not correlated to the state of the electron. If we
drop this time-asymmetric assumption, the world looks rather
different... but I'll let Price explain that to you.

Anyway, Price is an expert at spotting covertly time-asymmetric
assumptions. You may remember from "week26" that he even got into a
nice argument with Stephen Hawking about the arrow of time, thanks to
this habit of his. You can read more about it in:

1) Huw Price, Time's Arrow and Archimedes' Point: New Directions for a
Physics of Time, Oxford University Press, 1996. Table of contents and
first chapter available at http://plato.stanford.edu/price/TAAP.html


Also, there is a new book out by Hawking and Roger Penrose on quantum
gravity. First they each present their own ideas, and then they duke it
out in a debate in the final chapter. This book is an excellent place
to get an overview of some of the main ideas in quantum gravity. It
helps if you have a little familiarity with general relativity, or
differential geometry, or are willing to fake it.

There is even some stuff here about the arrow of time! Hawking has a
theory of how it arose, starting from his marvelous "no-boundary
boundary conditions", which say that the wavefunction of the universe is
full of quantum fluctuations corresponding to big bangs which erupt and
then recollapse in big crunches. The wavefunction itself has no obvious
"time-asymmetry"; indeed, time as we know it only makes sense *within* any
one of the quantum fluctuations, one of which is presumably the world we
know! But Hawking thinks that each of these quantum fluctuations, or at
least most of them, should have an arrow of time. This is what Price
raised some objections to. Hawking seems to argue that each quantum
fluctuation should "start out" rather smooth near its big bang and
develop more inhomogeneities as time passes, "winding up" quite wrinkly
near its big crunch. But it's not at all clear what this "starting out"
and "winding up" means. Possibly he is simply speaking vaguely, and all
or most of the quantum fluctuations can be shown to have one smooth end
and one wrinkly end. That would be an adequate resolution to the
arrow of time problem. But it's not clear, at least not to me, that
Hawking really showed this.

Penrose, on the other hand, has some closely related ideas. His "Weyl
curvature hypothesis" says that the Weyl curvature of spacetime goes to
zero at initial singularities (e.g. the big bang) and infinity at final
ones (e.g. black holes). The Weyl curvature can be regarded as a
measure of the presence of inhomogeneity --- the "wrinkliness" I alluded
to above. The Weyl curvature hypothesis can be regarded as a
time-asymmetric law built into physics from the very start.

To see them argue it out, read

2) Stephen Hawking and Roger Penrose, The Nature of Space and Time,
Princeton University Press, 1996.


There are also a couple of more technical books on general relativity
that I'd been meaning to get ahold of for a long time. They both
feature authors of that famous book,

3) Charles Misner, Kip Thorne and John Wheeler, Gravitation, Freeman
Press, 1973,

which was actually the book that made me decide to work on quantum
gravity, back at the end of my undergraduate days. They are:

4) Ignazio Ciufolini and John Archibald Wheeler, Gravitation and
Inertia, Princeton University Press, 1995.

and

5) Kip Thorne, Richard Price and Douglas Macdonald, eds., Black Holes:
The Membrane Paradigm, Yale University Press, 1986.

The book by Ciufolini and Wheeler is full of interesting stuff, but it
concentrates on "gravitomagnetism": the tendency, predicted by general
relativity, for a massive spinning body to apply a torque to nearby
objects. This is related to Mach's old idea that just as spinning a
bucket pulls the water in it up to the edges, thanks to the centrifugal
force, the same thing should happen if instead we make lots of stars
rotate around the bucket! Einstein's theory of general relativity was
inspired by Mach, but there has been a long-running debate over whether
general relativity is "truly Machian" --- in part because nobody knows
what "truly Machian" means. In any event, Ciufolini and Wheeler argue
that gravitomagnetism exhibits the Machian nature of general relativity,
and they give a very nice tour of gravitomagnetic effects.

That is fine in theory. However, the gravitomagnetic effect has never
yet been observed! It was supposed to be tested by Gravity Probe B, a
satellite flying at an altitude of about 650 kilometers, containing a
superconducting gyroscope that should precess at a rate of 42
milliarcseconds per year thanks to gravitomagnetism. I don't know what
ever happened with this, though: the following web page says "Gravity
Probe B is expected to fly in 1995", but now it's 1996, right? Maybe
someone can clue me in to the latest news.... I seem to remember some
arguments about funding the program.

6) The story of Gravity Probe B, http://www-leland.stanford.edu/~michman/RELATIVITYmosaic/GPBmosaic/GPB.html#GPBstory
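As a sanity check on that 42 milliarcsecond figure, the frame-dragging rate can be estimated from the orbit-averaged Schiff formula for a gyroscope in a polar circular orbit, Omega = GJ/(2 c^2 a^3). Here's a back-of-envelope sketch; the values for Earth's spin angular momentum and the orbit radius are my own rough numbers, not anything from the post:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
J = 5.86e33          # Earth's spin angular momentum, kg m^2/s (rough)
a = 6.371e6 + 6.5e5  # orbit radius: Earth's radius + 650 km altitude, m

# Orbit-averaged frame-dragging (Lense-Thirring) precession rate of a
# gyroscope in a polar circular orbit: Omega = G J / (2 c^2 a^3).
omega = G * J / (2 * c**2 * a**3)     # rad/s

seconds_per_year = 3.156e7
mas_per_rad = math.degrees(1) * 3600 * 1000
rate = omega * seconds_per_year * mas_per_rad
print(round(rate))   # roughly 41 mas/yr, consistent with the quoted 42
```

The small residual difference from 42 is well within the sloppiness of the input numbers.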

Kip Thorne's name comes up a lot in conjunction with black holes and the
LIGO --- or Laser-Interferometer Gravitational-Wave Observatory --- project.
As pairs of black holes or neutron stars emit gravitational radiation,
they should spiral in towards each other. In their final
moments, as they merge, they should emit a "chirp" of gravitational
radiation, increasing in frequency and amplitude until their ecstatic
union is complete. The LIGO project aims to observe these chirps, and
any other sufficiently strong gravitational radiation that happens to be
passing by our way. LIGO aims to do this by using laser interferometry
to measure the distance between two points about 4 kilometers apart to
an accuracy of about 10^{-18} meters, thus detecting tiny ripples in the
spacetime metric. For more on LIGO, try

7) LIGO project home page, http://www.ligo.caltech.edu/
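To get a feel for that sensitivity: a displacement of 10^{-18} meters over a 4 kilometer arm corresponds to a dimensionless strain of about 2.5 x 10^{-22}. A trivial sketch, using only the figures quoted above:

```python
arm_length = 4e3   # meters: the two points about 4 km apart
delta_l = 1e-18    # meters: the displacement accuracy quoted above
strain = delta_l / arm_length   # dimensionless strain h = dL / L
print(strain)      # ~ 2.5e-22
```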

Thorne helped develop a nice way to think of black holes by envisioning
their event horizon as a kind of "membrane" with well-defined
mechanical, electrical and magnetic properties. This is called the
membrane paradigm, and is useful for calculations and understanding what
black holes are really like. The book "Black Holes: The Membrane
Paradigm" is a good place to learn about this.


Now let me return to the tale of 2-categories. So far I've said only
that a 2-category is some sort of structure with objects, morphisms
between objects, and 2-morphisms between morphisms. But I have been
attempting to develop your intuition for Cat, the primordial example of
a 2-category. Remember, Cat is the 2-category of all categories! Its
objects are categories, its morphisms are functors, and its 2-morphisms
are natural transformations --- these being defined in "week73" and
again in "week75".

How can you learn more about 2-categories? Well, a really good place is
the following article by Ross Street, who is one of the great gurus of
n-category theory. For example, he was the one who invented
omega-categories!

8) Ross Street, Categorical structures, in Handbook of Algebra, vol. 1,
ed. M. Hazewinkel, Elsevier, 1996.

Physicists should note his explanation of the Yang-Baxter and
Zamolodchikov equations in terms of category theory. If you have
trouble finding this, you might try

9) George Kelly and Ross Street, Review of the elements of 2-categories,
Springer Lecture Notes in Mathematics 420, Berlin, 1974, pp. 75-103.

I can't really compete with these for thoroughness, but at least let me
give the definition of a 2-category. I'll give a pretty nuts-and-bolts
definition; later I'll give a more elegant and abstract one. Readers
who are familiar with Cat should keep this example in mind at all times!

This definition is sort of long, so if you get tired of it, concentrate
on the pictures! They convey the basic idea. Also, keep in mind that
this is going to be sort of like the definition of a category, but
with an extra level on top, the 2-morphisms.

So: first of all, a 2-category consists of a collection of "objects" and a
collection of "morphisms". Every morphism f has a "source" object and a
"target" object. If the source of f is X and its target is Y, we write
f: X -> Y. In addition, we have:

1) Given a morphism f: X -> Y and a morphism g: Y -> Z, there
is a morphism fg: X -> Z, which we call the "composite" of f and g.

2) Composition is associative: (fg)h = f(gh).

3) For each object X there is a morphism 1_X: X -> X, called the
"identity" of X. For any f: X -> Y we have 1_X f = f 1_Y = f.

You should visualize the composite of f: X -> Y and g: Y -> Z as
follows:

        f            g
 X ---------> Y ---------> Z

So far this is exactly the definition of a category! But a 2-category
ALSO consists of a collection of "2-morphisms". Every 2-morphism T has
a "source" morphism f and a target morphism g. If the source of T is f
and its target is g, we write T: f => g. If T: f => g, we require that
f and g have the same source and the same target; for example, f: x -> y
and g: x -> y. You should visualize T as follows:

       f
    ---->---
   /        \
  x     T    y
   \        /
    ---->----
       g

People usually draw a double arrow like => going down next to the T, but
I can't do that here.

In addition, we have:

1') Given a 2-morphism S: f => g and a 2-morphism T: g => h, there is a
2-morphism ST: f => h, which we call the "vertical composite" of S and
T.

2') Vertical composition is associative: (ST)U = S(TU).

3') For each morphism f there is a 2-morphism 1_f: f => f, called the
"identity" of f. For any T: f => g we have 1_f T = T 1_g = T.

Note that these are just like the previous 3 rules. We draw the vertical
composite of S: f => g and T: g => h like this:

       f
    ---->----
   /    S    \
  /     g     \
 x ----->----- y
  \     T     /
   \         /
    ---->---
       h

Now for a twist. We also require that, given morphisms f,f': x -> y
and g,g': y -> z, and 2-morphisms S: f => f' and T: g => g', there is a
2-morphism S.T: fg => f'g', called the "horizontal composite" of S and
T. We draw it as follows:

       f            f'
    ---->---    ---->---
   /        \  /        \
  x     S    y     T     z
   \        /  \        /
    ---->----   ---->----
       g            g'

Finally, we demand the "exchange law" relating horizontal and vertical
composition:

(ST).(S'T') = (S.S')(T.T')

This makes the following 2-morphism unambiguous:

       f              f'
    ---->----      ---->----
   /    S    \    /    S'   \
  /     g     \  /     g'    \
 x ----->----- y ----->----- z
  \     T     /  \     T'    /
   \         /    \         /
    ---->---       ---->---
       h              h'

We can think of it either as the result of first doing two vertical
composites, and then one horizontal composite, or as the result of first
doing two horizontal composites, and then one vertical composite!

Here we can really see why higher-dimensional algebra deserves its name.
Unlike category theory, where we can visualize morphisms as
1-dimensional arrows, here we have 2-morphisms which are intrinsically
2-dimensional, and can be composed both vertically and horizontally.

Now if you are familiar with Cat, you may be wondering how we vertically
and horizontally compose natural transformations, which are the
2-morphisms in Cat. Let me leave this as an exercise for now... there's
a nice way to do it that makes Cat into a 2-category. This exercise is
a good one to build up your higher-dimensional algebra muscles.
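If you'd like to experiment concretely, here is a minimal computational sketch (my own toy model, not anything from Street or Kelly). Restrict to the part of Cat whose objects are one-object categories, i.e. groups: morphisms are then group homomorphisms, and a 2-morphism S: f => g is just a group element satisfying S f(m) = g(m) S for all m (the single component of a natural transformation). Vertical and horizontal composition become group multiplications, and the exchange law can be checked numerically, here on the permutation group S3:

```python
from itertools import permutations

# The group S3, viewed as a one-object category: its elements are
# permutations of (0, 1, 2) and composition is the only operation.
G = list(permutations(range(3)))
e = (0, 1, 2)

def mul(p, q):
    """Compose permutations: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

def conj(a):
    """A functor (= homomorphism): m |-> a m a^-1."""
    return lambda m: mul(mul(a, m), inv(a))

def is_2mor(S, f, g):
    """Check S: f => g, i.e. S f(m) = g(m) S for all m (naturality)."""
    return all(mul(S, f(m)) == mul(g(m), S) for m in G)

def vcomp(S, T):
    """Vertical composite ST of S: f => g and T: g => h."""
    return mul(T, S)

def hcomp(S, T, src_T):
    """Horizontal composite S.T of S: f => f' and T: g => g',
    where src_T is the functor g, the source of T."""
    return mul(T, src_T(S))

# Two "vertical stacks": S: f => g, T: g => h and S': f' => g', T': g' => h'.
a, t = (1, 0, 2), (0, 2, 1)
f, g, h = conj(e), conj(a), conj(mul(t, a))
S, T = a, t
a2, t2 = (1, 2, 0), (2, 0, 1)
f2, g2, h2 = conj(e), conj(a2), conj(mul(t2, a2))
S2, T2 = a2, t2

assert is_2mor(S, f, g) and is_2mor(T, g, h)
assert is_2mor(S2, f2, g2) and is_2mor(T2, g2, h2)

# The exchange law: (ST).(S'T') = (S.S')(T.T')
lhs = hcomp(vcomp(S, T), vcomp(S2, T2), f2)
rhs = vcomp(hcomp(S, S2, f2), hcomp(T, T2, g2))
assert lhs == rhs
```

The horizontal composite formula here is the one-object shadow of the Godement product of natural transformations, so working the full Cat exercise above will tell you where it comes from.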

In fact, we could have invented the above definition of 2-category
simply by thinking a lot about Cat and what you can do with categories,
functors, and natural transformations. I'm pretty sure that's more or
less what happened, historically! Thinking hard enough about nCat leads
us on to the definition of (n+1)-categories....

But that's enough for now. Typing those diagrams is hard work.

-----------------------------------------------------------------------
Previous issues of "This Week's Finds" and other expository articles on
mathematics and physics, as well as some of my research papers, can be
obtained by anonymous ftp from math.ucr.edu; they are in the subdirectory
pub/baez. The README file lists the contents of all the papers. On the
World-Wide Web, you can get these files by going to

http://math.ucr.edu/home/baez/README.html

If you're mainly interested in "This Week's Finds", you might try

http://math.ucr.edu/home/baez/twf.html

but if you are cursed with a slow connection and just want the latest
issue, go to

http://math.ucr.edu/home/baez/this.week.html

john baez

Apr 21, 1996
In article <4lcfhf$h...@noise.ucr.edu> ba...@math.ucr.edu (John Baez) writes:
>Now for a twist. We also require that, given morphisms f,f': x -> y
>and g,g': y -> z, and 2-morphisms S: f => f' and T: g => g', there is a
>2-morphism S.T: fg => f'g', called the "horizontal composite" of S and
>T. We draw it as follows:
>
>        f            f'
>     ---->---    ---->---
>    /        \  /        \
>   x     S    y     T     z
>    \        /  \        /
>     ---->----   ---->----
>        g            g'

I forgot to state that we demand that horizontal composition be
associative, and that the identity 2-morphisms are also identities for
horizontal composition. To be precise:

1'') Given morphisms f,f': x -> y and g,g': y -> z, and 2-morphisms
S: f => f' and T: g => g', there is a 2-morphism S.T: fg => f'g', which
we call the "horizontal composite" of S and T.

2'') Horizontal composition is associative: (S.T).U = S.(T.U).

3'') The identities for vertical composition are also the identities for
horizontal composition. That is, given f,g: x -> y and T: f => g we
have 1_{1_x}.T = T.1_{1_y} = T.

(All these zillions of clauses in the definition of 2-category will
follow automatically from the more terse definition I'll give later.)

I also lost whatever shreds of credibility I might have had by getting
Kelly's name wrong; he's "G. Maxwell Kelly", usually known as Max Kelly.
Thanks to James Dolan for catching this and also various typos.

Doug Natelson

Apr 21, 1996
ba...@math.ucr.edu (John Baez) writes:
>This Week's Finds in Mathematical Physics - Week 80

[deletia about gravitomagnetism]

>
>That is fine in theory. However, the gravitomagnetic effect has never
>yet been observed! It was supposed to be tested by Gravity Probe B, a
>satellite flying at an altitude of about 650 kilometers, containing a
>superconducting gyroscope that should precess at a rate of 42
>milliarcseconds per year thanks to gravitomagnetism. I don't know what
>ever happened with this, though: the following web page says "Gravity
>Probe B is expected to fly in 1995", but now it's 1996, right? Maybe
>someone can clue me in to the latest news.... I seem to remember some
>arguments about funding the program.
>
>6) The story of Gravity Probe B, http://www-leland.stanford.edu/~michman/RELATIVITYmosaic/GPBmosaic/GPB.html#GPBstory
>

Well, I don't work for GP-B, but I can tell you what I know, for what it's
worth....

Gravity Probe B (sometimes jokingly referred to as 'The Project that Ate
Stanford' :) ) continues along the path towards its eventual launch.
The last I heard (a few months ago) was that they had a launch date
scheduled for 1999, with efforts on the Stanford campus winding up
around 18 months before that. As with all NASA missions, the satellite
needs to be done and ready for launch prep stuff well in advance of
the launch date.

GP-B has been beset by the now-familiar annual funding battle in
Washington. Seems like every year or two, someone nixes the budget
for it. Then, the two PIs, John Turneaure and Francis Everitt, fly to
DC and manage to get the funding revived. Pretty impressive job
of keeping the program alive in this era of budget cuts. Sometime in the
last two years, there was a review done of the project by (IIRC)
the National Academy, in order to evaluate the project's status and its
likely completion schedule. Don't know how that worked out exactly,
but given that GP-B is still here, it must've been fairly favorable.

An interesting historical note: GP-B was originally conceived by
Leonard Schiff around 30 years ago. At the time, it was not really
technologically feasible, but work's been going on ever since. Blas
Cabrera, a professor here whose group is developing cryogenic particle
detectors for use in dark matter searches, got his PhD on GP-B around
twenty years ago (his thesis allowed the achievement of the lowest
magnetic fields *ever*, necessary for the GP-B precession measurement
not to be influenced by stray flux through the sensing SQUIDs).

Hopefully a direct participant in the project will post a more
concrete description of what's going on currently....

Douglas Natelson
random physics grad student


Greg Weeks

Apr 23, 1996
: [Moderator's note: Well, I've read it. I found it very clear-headed,
: which is why I recommend it. ... I think he does a decent job of
: explaining how all perceived asymmetries boil down to just one --- the
: low-entropy beginning of the universe.

Uh oh. :-)

I hope some of you will indulge me and consider a trivial thought
experiment:

Add some salt to a container. Then add pepper. Shake. The two spices
mix together and then stay mixed.

* Why does this happen? Well, it is hopeless to consider the motion of
* the individual granules. But we can say that the arrangements of
* granules with the spices significantly mixed outnumber the remaining
* arrangements by a STAGGERING numerical factor. So it is EXCEEDINGLY
* likely that the spices will end up significantly mixed.

Now consider the same experiment from the time-reversed viewpoint. At
first the container has thoroughly mixed spices. Then it is shaken.
It is hopeless to consider the motion of the individual granules. We
can say, though, that it is EXCEEDINGLY likely that the spices will
remain mixed. BUT THEY DON'T.

Does anyone out there agree that the starred argument is bogus and that the
mixing of the spices remains mysterious?
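For what it's worth, the STAGGERING factor in the starred argument is easy to exhibit numerically. A toy version (my own, not Weeks's setup): put 10 salt and 10 pepper granules in a row of 20 cells, and call an arrangement "significantly mixed" if between 2 and 8 salt granules sit in the right half.

```python
import math
import random

n = 10  # granules of each spice, in a row of 2n cells

# Total distinguishable arrangements of n salt granules among 2n cells,
# versus just 2 fully sorted ones (salt-left or salt-right):
total = math.comb(2 * n, n)
print(total)            # 184756

def salt_on_right(arr):
    """Number of salt granules in the right half of the row."""
    return sum(1 for i, s in enumerate(arr) if s == "salt" and i >= n)

# "Shake" by shuffling, many times; nearly every shake lands in a
# significantly mixed arrangement.
random.seed(1)
arr = ["salt"] * n + ["pepper"] * n
trials = 10000
mixed = 0
for _ in range(trials):
    random.shuffle(arr)
    if 2 <= salt_on_right(arr) <= 8:
        mixed += 1
print(mixed / trials)   # very close to 1: unmixed shakes are freakishly rare
```

Of course, as the rest of the thread argues, this only quantifies the counting step; it doesn't by itself settle why the argument may be run from past to future but not the reverse.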

When I read Price's 1st chapter I figured that _he_ at least would agree.
However, note that the salt/pepper universe starts out with an amazingly
low entropy; but the result is still unexplained: Why is it okay to reason
from past to future but not from future to past? The low entropy at the
"beginning" of the universe does not solve the mystery. If Price says it
does, then maybe he _wouldn't_ agree that the mixing of the spices is
mysterious.

Maybe I'm just nuts. I mean, I once told someone at a nighttime party that
the stars you see looking up might actually be on the other side of the
earth. I was drunk, though, and I noticed my mistake promptly. Still, it
shows how low I can go. Am I doing it again? :-?


Greg Weeks


PS: Or maybe the mystery is real but it can't be resolved from
considerations of (quasi)classical physics. What do _you_ think? Wouldn't
that be weird?

PPS: I believe Boltzmann killed himself, as did Ehrenfest. It doesn't pay
to think about this stuff too much. Hey, I'm just kidding! I think. No,
really.


Wayne Hayes

Apr 25, 1996
In article <4lj3rh$n...@news.dtc.hp.com>, Greg Weeks <we...@dtc.hp.com> wrote:
> Add some salt to a container. Then add pepper. Shake. The two spices
> mix together and then stay mixed.
>
> * Why does this happen? Well, it is hopeless to consider the motion of
> * the individual granules. But we can say that the arrangements of
> * granules with the spices significantly mixed outnumber the remaining
> * arrangements by a STAGGERING numerical factor. So it is EXCEEDINGLY
> * likely that the spices will end up significantly mixed.
>
> Now consider the same experiment from the time-reversed viewpoint. At
> first the container has thoroughly mixed spices. Then it is shaken.
> It is hopeless to consider the motion of the individual granules. We
> can say, though, that it is EXCEEDINGLY likely that the spices will
> remain mixed. BUT THEY DON'T.

The difference is the "initial conditions". In the first case, you
start with a random set of initial conditions, from which a random
outcome happens after time T, and when you choose a random outcome you
are overwhelmingly assured of a "mixed" outcome. Taking that final
outcome and reversing time in an unperturbed Newtonian system converts
the final outcome to an initial condition, but NOT RANDOMLY CHOSEN. It
is a very very special initial condition, one that *by construction*
leads, after time -T, to an unmixed state.
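This point -- that the unmixing initial condition is special *by construction*, while a randomly chosen mixed state overwhelmingly stays mixed -- can be demonstrated with any invertible deterministic "shake". A sketch under my own assumptions (the dynamics is the Arnold cat map on a grid of granules, not anything from the thread):

```python
import random

N = 32  # granules sit on an N x N grid; start salt-left, pepper-right

def step(grid):
    """One tick of an invertible, deterministic 'shake':
    the cat map (x, y) -> (x + y, x + 2y) mod N."""
    return {((x + y) % N, (x + 2 * y) % N): s for (x, y), s in grid.items()}

def unstep(grid):
    """The exact inverse: (x, y) -> (2x - y, y - x) mod N."""
    return {((2 * x - y) % N, (y - x) % N): s for (x, y), s in grid.items()}

def sortedness(grid):
    """Fraction of salt granules in the left half: 1.0 = fully sorted."""
    salt = [xy for xy, s in grid.items() if s == "salt"]
    return sum(1 for (x, _) in salt if x < N // 2) / len(salt)

start = {(x, y): ("salt" if x < N // 2 else "pepper")
         for x in range(N) for y in range(N)}

# A mixed state *constructed* by shaking the sorted state:
special = start
for _ in range(5):
    special = step(special)

# A mixed state chosen at random:
random.seed(0)
labels = list(start.values())
random.shuffle(labels)
generic = dict(zip(start.keys(), labels))

# Run both backwards.  The constructed state unmixes perfectly;
# the randomly chosen one stays mixed.
back_special, back_generic = special, generic
for _ in range(5):
    back_special = unstep(back_special)
    back_generic = unstep(back_generic)

print(sortedness(back_special))            # 1.0 -- exactly the sorted start
print(round(sortedness(back_generic), 2))  # about 0.5 -- still mixed
```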

The reason I specified "in an unperturbed Newtonian system" is because
the process has to be completely and exactly reversible for this to
happen. Any small perturbations while the experiment is being reversed,
caused either by trucks roaming around outside the building or by quantum
fluctuations, will likely disturb the experiment away from the trajectory
leading back to the unmixed state.

I think the interesting question is now, what if the Universe were in
fact Newtonian? Then, as a whole, the Universe should be reversible and
the arrow of time is carved in stone. But in a Universe that allows
quantum fluctuations in both time directions, the arrow of time becomes
a problem again. Is a quantum system reversible? Does a photon passing
"backwards in time" through a polarizing filter regain a polarization
that is not along the axis of the filter?

--
"Rent control is second only to bombing || Wayne Hayes, wa...@cs.utoronto.ca
as a way of destroying a city." || Astrophysics & Computer Science
-- Henry Spencer || http://www.cs.utoronto.ca/~wayne

Greg Weeks

Apr 25, 1996
Alex Kasman (kas...@alpha.math.uga.edu) wrote:

: (Notice how the asymmetry of time shows up here as nothing but our belief
: that the past is fixed whereas the future is probabilistic.)

This belief is certainly strong enough to obtain the usual results, since
it precludes even considering how an imperfectly known system can evolve
from future to past. And, as always, a mystery remains: Why is the past
fixed and the future probabilistic instead of the reverse? (This isn't my
favorite way of stating the mystery, but right now I'm trying to follow the
pea under every shell it hides under.)

: So, here is what I think the second law of thermodynamics should say:
:
: We don't know if entropy will increase or decrease, but if you have to
: bet, bet on an increase.

This is certainly correct in practice. But I was mostly concerned with
_why_ it is correct -- eg, why is the asymmetric belief that you mention
above preferable to its time reverse? And that remains a mystery.

Greg


LBsys

Apr 25, 1996

In article <4lmram$q...@agate.berkeley.edu>, kas...@alpha.math.uga.edu
(Alex Kasman) writes:

>Maybe I'm as crazy as Greg Weeks -- nah, I couldn't be *that* crazy --
>but his argument above reminds me of my own thoughts on the matter.
>Now, following in his footsteps, let me come out of the closet with my
>own doubts (or maybe confusion) about the second law of
>thermodynamics.

There are so many physicists who say "Oh, the 2nd law, ahem, well you
know, we calculate with entropy, it works out right, but actually..." and
then put their hand up to hide their mouth: "I just can't make myself
*believe* in it, there's something wrong with it!"

>I tend to agree with Mr. Weeks (or maybe I'm just reading my own
>thoughts into his discussion) that what people really mean is that it
>is very UNLIKELY that a state will evolve in such a way that it
>becomes more ``orderly'' as time passes. Basically, I think people
>are willing to believe in a low entropy state in the past, because it
>has already happened, but are not willing to believe in it happening in
>the future because...well why WOULD it happen? (Notice how the
>asymmetry of time shows up here as nothing but our belief that the
>past is fixed whereas the future is probabilistic.)

[Nice lottery argument (1-2-3-4-5 has the same probability as any
other sequence) snipped]

>So, here is what I think the second law of thermodynamics should say:
>
>We don't know if entropy will increase or decrease, but if you have to
>bet, bet on an increase.

If probability is the clue to the mysteries of the 2nd law, then entropy
decreases should happen all around us from time to time, shouldn't they?

I have a deep 'feeling' that the 2nd law is incomplete, or just one of
the two equations of the real set, because something obviously gets lost
here (the order of things) but, unlike anywhere else in physics, it gets
lost without a trace. Maybe the idea of conservation is engraved too
deeply in my head, but I just cannot think of something being lost without
something else being gained.

To add to my confusion: in the above salt-and-pepper experiment, replace
the salt with whey (what is left of milk once most of the fat is removed)
and replace the pepper with cream (i.e. just take fresh milk). Shake
very well (like they do in the dairy). Wait. Watch how mysterious forces
separate your high-entropy mixture (wrt fat/water particles) into the
lowest-entropy state possible (wrt that): fat and water unmix completely.
Gravity has done it. Look at how stars and galaxies form. Isn't that a
lower state of entropy wrt the order of the previous dust cloud? And
what about the entropy of a black hole? Could it be that gravity and
entropy are counteracting? Flip sides of one coin?


Lorenz Borsche (FRG)

Dubium sapientiae initium. [Descartes]


Jakob Schiotz

Apr 25, 1996
Greg Weeks (we...@dtc.hp.com) wrote:
: I hope some of you will indulge me and consider a trivial thought
: experiment:

: Add some salt to a container. Then add pepper. Shake. The two spices
: mix together and then stay mixed.

: * Why does this happen? Well, it is hopeless to consider the motion of
: * the individual granules. But we can say that the arrangements of
: * granules with the spices significantly mixed outnumber the remaining
: * arrangements by a STAGGERING numerical factor. So it is EXCEEDINGLY
: * likely that the spices will end up significantly mixed.

: Now consider the same experiment from the time-reversed viewpoint. At
: first the container has thoroughly mixed spices. Then it is shaken.
: It is hopeless to consider the motion of the individual granules. We
: can say, though, that it is EXCEEDINGLY likely that the spices will
: remain mixed. BUT THEY DON'T.

: Does anyone out there agree that the starred argument is bogus and that
: the mixing of the spices remains mysterious?

No, I think the starred argument is perfectly valid. The reason that the
time-reversed experiment gives a "surprising" result is that you started
with an unusual initial condition: the single state that actually is the
result of the mixing that you are going to apply. Since there are so many
more mixed states than sorted states, it is exceedingly LIKELY that a
given RANDOM mixed configuration remains mixed. But you explicitly picked
one of the few that don't.

While I am babbling: You sometimes see the claim that the "entropic arrow
of time", the "conscious arrow of time" (we remember the past, not the
future) and the "cosmological arrow of time" (the universe expands) are
coupled, and if the universe stops expanding and starts contracting the
other two "arrows of time" will change as well. It seems highly unlikely to
me that the process of mixing salt and pepper is influenced by the expansion
of the universe, but then I am no cosmologist (nor a philosopher). :-)

Jakob

--
Jakob Schiotz ! Fax: +1 (314) 935 6219
Department of Physics ! Phone: +1 (314) 935 4968
Washington University ! Email: sch...@howdy.wustl.edu
St. Louis, MO 63130, USA ! WWW: http://nils.wustl.edu/schiotz.html


Greg Weeks

Apr 25, 1996
This may be too late, but:

Greg Weeks (we...@dtc.hp.com) wrote:
: * Why does this happen? Well, it is hopeless to consider the motion of
: * the individual granules. But we can say that the arrangements of
: * granules with the spices significantly mixed outnumber the remaining
: * arrangements by a STAGGERING numerical factor. So it is EXCEEDINGLY
: * likely that the spices will end up significantly mixed.

: Now consider the same experiment from the time-reversed viewpoint. At
: first the container has thoroughly mixed spices. Then it is shaken.
: It is hopeless to consider the motion of the individual granules. We
: can say, though, that it is EXCEEDINGLY likely that the spices will
: remain mixed. BUT THEY DON'T.

Initially I did not explicitly state the point of the above two paragraphs
(which may have led to some confusion). The point was that both paragraphs
use exactly the same state-counting argument. In the first case the
argument gives the correct result, but in the second case it doesn't. So
the argument itself is bogus. It only works reasoning from past to future.


Greg


james dolan

Apr 25, 1996

greg weeks writes:

>It may be incredibly unlikely, but it happens [in the time-reversed view]
>every single time I try it.


you mean: it happens every single time you pre-(or is it post-
???????)select the results to make sure that it happens!! try picking
more of a random sample and see what happens!!


Huw Price

Apr 26, 1996, to we...@dtc.hp.com

we...@dtc.hp.com (Greg Weeks) wrote:
>: [Moderator's note: Well, I've read it. I found it very clear-headed,
>: which is why I recommend it. ... I think he does a decent job of
>: explaining how all perceived asymmetries boil down to just one --- the
>: low-entropy beginning of the universe.
>
>Uh oh. :-)
>
>I hope some of you will indulge me and consider a trivial thought
>experiment:
>
> Add some salt to a container. Then add pepper. Shake. The two spices
> mix together and then stay mixed.
>
> * Why does this happen? Well, it is hopeless to consider the motion of
> * the individual granules. But we can say that the arrangements of
> * granules with the spices significantly mixed outnumber the remaining
> * arrangements by a STAGGERING numerical factor. So it is EXCEEDINGLY
> * likely that the spices will end up significantly mixed.
>
> Now consider the same experiment from the time-reversed viewpoint. At
> first the container has thoroughly mixed spices. Then it is shaken.
> It is hopeless to consider the motion of the individual granules. We
> can say, though, that it is EXCEEDINGLY likely that the spices will
> remain mixed. BUT THEY DON'T.
>
>Does anyone out there agree that the starred argument is bogus and that the
>mixing of the spices remains mysterious?
>
>When I read Price's 1st chapter I figured that _he_ at least would agree.
>However, note that the salt/pepper universe starts out with an amazingly
>low entropy; but the result is still unexplained: Why is it okay to reason
>from past to future but not from future to past? The low entropy at the
>"beginning" of the universe does not solve the mystery. If Price says it
>does, then maybe he _wouldn't_ agree that the mixing of the spices is
>mysterious.
>


Let me try to clarify Price's view on this:

I do think that the low entropy "beginning" is the only real mystery
in this area.

I agree with Weeks that his starred argument is bogus, if taken as
an *explanation* of the mixing.

However, I don't agree that this leaves a mystery about the mixing,
beyond that of the low entropy boundary condition. Why not? Because
the mixing is equally an unmixing, viewed from the opposite temporal
perspective, and the low entropy boundary condition does explain the
unmixing. At least, it explains the unmixing so long as what now
appears to be the initial condition -- the mixed spices -- doesn't
need explanation, and I think this does follow from the usual statistical
considerations.

If the statistical argument is thought of simply as a test for what
needs explaining, then it can validly be used "at both ends". At
one end, it shows that we do need an explanation. At the other end,
it shows that we don't. In effect, then, all the weight of explaining
the entropy gradient in between the two ends is carried by whatever
turns out to explain the low entropy end. Modern cosmology offers a
plausible story tracing this to the early universe, and that's where
the mystery currently rests.

I hope this clarifies my view. At http://plato.stanford.edu/price/publications.html
there's a preprint (in RTF format) of a conference paper in which I talk
about this -- look for "Chaos Theory and the Difference between Past and
Future" in the Unpublished Preprints section. It's not as thorough as
ch. 2 of my book, but might be of interest. Here's an excerpt:

"This conclusion applies to the countless individual processes in
which entropy increases, as well as to the Second Law in general.
Consider what happens when we remove the top from a bottle of beer,
for example: pressurized gas and liquid escape from the bottle.
Traditionally it has been taken for granted that we need to explain
why this happens, but I think this is a mistake. The gas escapes
simply because its initial microstate is such that this is what
happens when the bottle is opened. As the tradition recognizes,
however, this isn't much of an explanation, for we now want to know
*why* the initial microstate is of this kind. But the correct lesson
of the statistical approach is that this kind of microstate doesn't
need explanation, for it is (overwhelmingly) the most natural condition
for the system in question to possess. What does need to be explained
is why the microstate of the gas is such that, looked at in reverse,
the gas enters the bottle; for it is in this respect that the
microstate is unusual. And in the ordinary time sense, this is just
a matter of explaining how the gas comes to be in the bottle in
the first place."


Huw Price
School of Philosophy
University of Sydney
http://plato.stanford.edu/price/


LBsys

unread,
Apr 26, 1996, 3:00:00 AM4/26/96
to

In article <4lojsu$m...@agate.berkeley.edu>, jbu...@nwu.edu (Joshua W.
Burton) writes:
>[I wrote about the unmixing of milk and cream and the 2nd law]

>> Wait. Look how mysterious forces separate your mixture with high
>> entropy wrt fat/water particles to a state of the lowest entropy
>> possible (wrt that): fat and water unmix completely.
>
>This is a confusion of the meaning of "unmix". When the droplets
>of fat coalesce, the entropy of the system goes UP, because the
>individual fat molecules in different droplets are free to change
>places. More importantly, the surface energy of all those acres
>of little droplets is reduced to that of a single bottle-sized
>interface. When there is an available energy source, a system
>can always become more ordered by dissipating that energy. The
>only mystery is why the milk stayed emulsified to begin with.

To begin with: thank you for bothering to answer. I knew one could argue
that the entropy of the 'whole system' is something else, and that milk
is only an emulsion which is unmixing anyway. I still took this example
and the other one (the dust cloud), so now we have examples to discuss.
Why discuss? Well, is this case entirely closed yet? Is everyone happy
with entropy? A system losing potential energy that turns into nothing?
One of the best new books on biology (claiming that a combination of
shortage-and-surplus situations is responsible for great steps in
evolution) tries to explain how the development of highly ordered
structures (like the human brain) actually raises the entropy level of
the whole ecosystem. That was about the only chapter I couldn't follow
wholeheartedly. Let's see...

>That's a result of the electrostatic repulsion between the polar
>ends of lecithin molecules at the interfaces, and would be
>overcome in time by random tunneling. Gravity just speeds up
>the natural de-emulsification.

So gravity is looked upon as enforcing the entropy increase of systems?

>> Look at how stars and galaxies form. Isn't that a lower state of
>> entropy wrt the order of the previous dust cloud?
>

>No. This is a very common confusion, because gravity is the only
>example we have of a universally attractive force. In fact the
>clustered state of self-gravitating matter has _higher_ entropy
>than the diffuse state. If you leave stuff in a box for long
>enough, the baryons will all decay away (by virtual Hawking
>effects if not by GUTs or instantons), and you'll have a sea of
>equilibrium black-body radiation. But if you wait a REALLY long
>time, that radiation will all tunnel into a single black hole in
>equilibrium with its own Hawking radiation, and THAT configuration
>is overwhelmingly less likely to tunnel back, so that's where you
>spend most of eternity. Nearly all (and perhaps all) the arrows
>of time in the universe we see are a consequence of the universe
>having started in the tremendously low-probability state of
>{anything except one big mother black hole in equilibrium with
>its own Hawking radiation}. Clustering of stars and galaxies
>brings us closer to that high-entropy state, as does hydrogen
>burning in stars, radioactive decay, watching Hogan's Heroes
>reruns, and (probably) every other entropy-increasing event.

OK, once again: when particles cluster, you argue (and for sure it's
the commonly accepted version) that entropy has increased. Is it
because the potential energy of those particles with respect to each
other has been lost, the way a stone that has just fallen to earth has
lost its potential energy? For a layman it's a bit hard to understand
that on one hand entropy should be responsible for the fact that e.g.
gases dissipate when set free, and on the other hand for clustering.
Let's say I'm aboard a starship somewhere between galaxies. The
airlock is filled with hydrogen. I open the airlock. Ffffffth, the
hydrogen is gone and will dissipate and spread out. Is this an effect
of entropy? Let's say I do this (spreading hydrogen) for quite a
while; oops, it'll cluster. It'll even heat up (the system can't
possibly 'know' that if it reaches a certain point, the hydrogen will
'burn' to reach a higher state of entropy). So we have two different
pictures: when gravity forces are low, dissipation; when they are
high, clustering. I don't claim that this contradicts entropy by any
means, I just can't get the two pictures to fit into one.

>The big mystery, and probably the only one, is why the initial
>singularity of the universe is a low-entropy one. Some people
>distinguish a time-symmetric low-entropy final singularity as a
>gnaB giB, as opposed to a high-entropy final singularity, which
>they call a Big Crunch. If the universe is open, there is just
>a Big Bang, but if it's closed then we have to ask which sort of
>final singularity to expect. If the final singularity is a gnaB
>giB, it must be inconceivably far in the future, or we would
>find durable relics (abstemious little red dwarfs, longlived
>alien supercivilizations, etc.) living backwards around us today.
>Although an anti-burning star would be very hard to find, unless
>you knew where to look for the converging photons, there are
>other signs of a gnaB giB that would be hard to miss.

Agreed wholeheartedly :-)

>My take on it is that the backwards shuffling thought experiments
>can all be resolved as follows: IF the atoms in this room are
>moving randomly, then they aren't all going to end up in that
>corner in the near future, and they aren't likely to have come
>from that corner in the near past. But the first clause contains
>a hidden fallacious assumption. The atoms in the room are in
>fact moving randomly, *subject to the constraint that their paths
>project back to a low-entropy singularity less than 10^18 seconds
>in a direction we shall call "the past"*.

Isn't that contradicting the meaning of 'randomness'? If something is
really _randomly_ moving, we shouldn't be able to project its path
either into the future or into the past, IMHO...

> If there were another
>such constraint anywhere near that close to us in the future, we
>would find it influencing all sorts of apparently random events.

.. See above :-)

My (layman's) thoughts about this 'cluster' around the following: if
the beginning was the BB, I visualize an incredibly low state of
entropy in terms of 'particle pressure' vs. an incredibly high state
of entropy regarding gravity. Now this balance swings like a pendulum.
By spreading out, 'particle pressure' (or say 'radiation pressure')
decreases, whereas potential energy is gained, the way you stretch a
spring. The product of both remains the same, as -- unlike a real
pendulum -- there is nothing outside the system where energy could
flow to (in terms of air friction and the like) and thus be 'lost' to
it. Thus it should swing back once. And to. And fro. And ....
Certainly generations of well-educated physicists have thought this
way. What is the argument against it?

I hope I haven't annoyed anyone with my layman's ramblings.

Cheerio

Benjamin J. Tilly

unread,
Apr 26, 1996, 3:00:00 AM4/26/96
to

In article <4lj3rh$n...@news.dtc.hp.com>
we...@dtc.hp.com (Greg Weeks) writes:

> : [Moderator's note: Well, I've read it. I found it very clear-headed,
> : which is why I recommend it. ... I think he does a decent job of
> : explaining how all perceived asymmetries boil down to just one --- the
> : low-entropy beginning of the universe.
>
> Uh oh. :-)
>
> I hope some of you will indulge me and consider a trivial thought
> experiment:
>
> Add some salt to a container. Then add pepper. Shake. The two spices
> mix together and then stay mixed.
>
> * Why does this happen? Well, it is hopeless to consider the motion of
> * the individual granules. But we can say that the arrangements of
> * granules with the spices significantly mixed outnumber the remaining
> * arrangements by a STAGGERING numerical factor. So it is EXCEEDINGLY
> * likely that the spices will end up significantly mixed.
>
> Now consider the same experiment from the time-reversed viewpoint. At
> first the container has thoroughly mixed spices. Then it is shaken.
> It is hopeless to consider the motion of the individual granules. We
> can say, though, that it is EXCEEDINGLY likely that the spices will
> remain mixed. BUT THEY DON'T.
>
> Does anyone out there agree that the starred argument is bogus and that the
> mixing of the spices remains mysterious?
>

Not I. The first point is that if you do something random, then the
spices will mix.
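
To make the counting argument concrete, here is a quick numerical
sketch (illustrative code of my own, not anything from the thread):

```python
# A quick illustrative sketch: shuffle 500 "salt" and 500 "pepper"
# granules at random and look at how mixed the result is, using a
# 10-segment macro-observation like the one discussed in the thread.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

granules = ["salt"] * 500 + ["pepper"] * 500  # start fully separated

def segment_counts(line, segments=10):
    """Count the salt granules in each equal segment of the line."""
    size = len(line) // segments
    return [line[i * size:(i + 1) * size].count("salt")
            for i in range(segments)]

print("before shuffle:", segment_counts(granules))
random.shuffle(granules)
print("after shuffle: ", segment_counts(granules))
# Before the shuffle the counts are [100]*5 + [0]*5; afterwards every
# segment holds roughly 50 salt granules -- "mixed" arrangements so
# overwhelmingly outnumber separated ones that a random shuffle
# essentially always lands in one.
```

In practice you will essentially never see a separated-looking
arrangement come out of the shuffle; that is Weeks' "STAGGERING
numerical factor" at work in the forward direction.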

The BASIC point is that for some mysterious reason, what we have now is
extremely non-random.

Therefore running backwards in time you do something extremely
non-random (since you are moving towards a very non-random state) while
running forwards you do something which is much more random.

Like all explanations, this one reduces the problem to (hopefully) more
basic questions. Unfortunately in this case it has not been reduced
very much. AFAIK nobody has a really solid argument for why the present
universe is extremely non-random. But it is. And there are many guesses
for why it is true...

One far-out guess which I like is that an expanding fairly well-mixed
universe will, from general relativity, inherently reduce entropy. What
are some signs of this? The first is the fact that you are moving apart
objects which, because of gravity, want to move together. This results
in what in Newtonian physics is regarded as an undue amount of
potential energy, which is unstable and hence is low in entropy. A
second sign is that once bodies do attract, the continuing expansion
means that the background cools down even as the bodies heat up from
coming together. This heat difference can do work, and hence is again a
low state of entropy. Therefore the inflation inherent in the Big Bang
seems to me to be a potential source of a low-entropy state. I believe
that John Baez once told me that there was a flaw in this argument, but
I do not remember what it was, and I (obstinately) still like it. Since
I am a mathematician and not a physicist, remove the pepper from the
salt, take the salt, then re-read my theory. :-)

> When I read Price's 1st chapter I figured that _he_ at least would agree.
> However, note that the salt/pepper universe starts out with an amazingly
> low entropy; but the result is still unexplained: Why is it okay to reason
> from past to future but not from future to past? The low entropy at the
> "beginning" of the universe does not solve the mystery. If Price says it
> does, then maybe he _wouldn't_ agree that the mixing of the spices is
> mysterious.
>

It only "solves" it by reducing one unknown phenomenon to another, in
this case the initially low entropy. However that is the nature of
explanation. Any answer to "why" can only reduce a question to more
basic "why" questions. In something like the physical universe, which
we lack absolute knowledge about, this chain of explanations that we
give always ends somewhere, and in a basic sense the question is
unanswered...

> Maybe I'm just nuts. I mean, I once told someone at a nighttime party that
> the stars you see looking up might actually be on the other side of the
> earth. I was drunk, though, and I noticed my mistake promptly. Still, it
> shows how low I can go. Am I doing it again? :-?

Not IMO. It seems to me that any explanation that reduces some
"obvious" fact of experience to some unfamiliar abstract idea is rarely
going to "feel" satisfying. And it is often hard to become so familiar
with the abstraction that the explanation is really going to feel
satisfying...

Ben Tilly


john baez

unread,
Apr 29, 1996, 3:00:00 AM4/29/96
to

In article <DqFru...@murdoch.acc.Virginia.EDU> sch...@howdy.wustl.edu writes:
>It seems highly unlikely to
>me that the process of mixing salt and pepper is influenced by the expansion
>of the universe, but then I am no cosmologist (nor a philosopher). :-)

To me it seems obvious. The expansion of the universe as time increases
is why the stars radiate energy outwards as time increases, which is why
the earth absorbs free energy from the sun as time increases, which is
why pepper trees and salt deposits form as time increases, which is why
we are able to pick pepper and dig up salt as time increases, which is
why we are able to put a bunch of pure salt next to a bunch of pure
pepper at the "beginning", rather than the "end", of this experiment.
All these arrows of time are correlated.

Note: by replacing the time coordinate t by -t, I could just as well
have said:

The expansion of the universe as time decreases is why the stars radiate
energy outwards as time decreases, which is why pepper trees and salt
deposits form as time decreases, which is why the earth absorbs free
energy from the sun as time decreases, which is why we are able to pick
pepper and dig up salt as time decreases, which is why we are able to
put a bunch of pure salt next to a bunch of pure pepper at the "end",
rather than the "beginning", of this experiment. All these arrows of
time are correlated.

This is just as valid as what I said before; I am just using different
coordinates. It's the correlation of the various arrows, rather than
the direction of any one, which has coordinate-independent meaning.


Wayne Hayes

unread,
Apr 30, 1996, 3:00:00 AM4/30/96
to

In article <4m303i$e...@guitar.ucr.edu>, john baez <ba...@guitar.ucr.edu> wrote:
>In article <DqFru...@murdoch.acc.Virginia.EDU> sch...@howdy.wustl.edu writes:
>>It seems highly unlikely to
>>me that the process of mixing salt and pepper is influenced by the expansion
>>of the universe, but then I am no cosmologist (nor a philosopher). :-)
>
>To me it seems obvious. The expansion of the universe as time increases
>is why the stars radiate energy outwards as time increases

All your deductions seem reasonable to me except this one, and this is
the crucial one, I think. I don't see the connection between the
expansion of the universe and the fact that stars radiate energy
outward. In fact, I've *never* been able to see how the global
expansion or contraction of the universe can have any bearing
whatsoever on local events like which direction photons move with
respect to the star that generated them.

Time for a thought experiment. Let's pretend omega were bigger than
one. Furthermore, last Tuesday marked the date when the universe
stopped expanding and started contracting. What would we see? Well,
I think we'd still see lots of red-shifted galaxies. In fact, about
100 million years from now, galaxies further than 100 million light-years
away would *still* be red-shifted, while galaxies less than 100 million
light-years away would be very slightly blue-shifted. During that
100-million-year period, the Milky Way would continue to rotate
in the same direction, life would continue pretty much as normal, stars
and rabbits would still be born, live, and die, in the same direction
of time as they do now. Entropy would continue to increase, and
photons would continue to radiate outwards from stars in forward time,
as the universe contracts in volume in forward time.

me...@cars3.uchicago.edu

unread,
Apr 30, 1996, 3:00:00 AM4/30/96
to

In article <1996Apr30.0...@jarvis.cs.toronto.edu>, wa...@cs.toronto.edu (Wayne Hayes) writes:

... snip ...


>--
> "Rent control is second only to bombing || Wayne Hayes, wa...@cs.utoronto.ca
> as a way of destroying a city." || Astrophysics & Computer Science
> -- Henry Spencer || http://www.cs.utoronto.ca/~wayne

It is not customary to reply to sig files, but I feel obliged to add a
correction: the effects of bombing can be mitigated faster. Rent
control is more destructive.

Mati Meron | "When you argue with a fool,
me...@cars.uchicago.edu | chances are he is doing just the same"

john baez

unread,
Apr 30, 1996, 3:00:00 AM4/30/96
to

In article <1996Apr30.0...@jarvis.cs.toronto.edu> wa...@cs.toronto.edu (Wayne Hayes) writes:
>In article <4m303i$e...@guitar.ucr.edu>, john baez <ba...@guitar.ucr.edu> wrote:
>>In article <DqFru...@murdoch.acc.Virginia.EDU> sch...@howdy.wustl.edu writes:
>>>It seems highly unlikely to
>>>me that the process of mixing salt and pepper is influenced by the expansion
>>>of the universe, but then I am no cosmologist (nor a philosopher). :-)

>>To me it seems obvious. The expansion of the universe as time increases
>>is why the stars radiate energy outwards as time increases

>All your deductions seem reasonable to me except this one, and this is
>the crucial one, I think.

I posted a correction to this on sci.physics.research, but it hasn't
shown up here yet, since the moderators are so lazy. I should have
said: "The expansion of the universe, starting from a low-entropy state,
is why the stars radiate energy outwards as time increases..." This was
implicit, since I earlier claimed that all the "arrows of time" can be
traced back to the fact that the universe began in a low-entropy state.
But I really should have said it more clearly!


Adam D. Jansen

unread,
May 1, 1996, 3:00:00 AM5/1/96
to

>I have a deep 'feeling' about the 2nd law being incomplete, or just one of
>the two equations of the real set, b/c something obviously gets lost here
>(the order of things) but, unlike anywhere else in physics, it gets lost
>without a trace. Maybe the thought of conservation is engraved too deep in
>my head, but I just cannot think of something getting lost without
>something else gaining.

There is a broader conservation law. Theoretically, CPT should be
conserved. That is, the combined symmetry of charge conjugation, parity,
and time reversal should be conserved according to most theories. In
any reaction where time reversal symmetry is broken, CP should also be
broken to compensate. So that the combined symmetry CPT remains
conserved. However, the only thing that we know of which breaks the CP
symmetry is the weak nuclear force. I think that this means that there
should be some kind of connection between the weak force and the
second law of thermodynamics.

Of course, this doesn't really answer the question. It just raises more
questions.


Have phun,
Adam D. Jansen
---------------------------------------------------------------------------
aja...@iastate.edu http://www.public.iastate.edu/~ajansen/
Iowa State University
We are Microsoft. You will be assimilated. Resistance is futile.
---------------------------------------------------------------------------

--
Adam D. Jansen



John Baez

unread,
May 1, 1996, 3:00:00 AM5/1/96
to

I have permission from Greg Weeks to quote some email of his.

He writes:

> Since the goal is to understand the origin of the arrow of time, it seems
> like a good idea to avoid subjective theories of probability, because our
> subjective natures are mired in time's arrow. So I was naturally drawn to
> "frequentism". I presumed that all macro-observations at all times were
> available for analysis. For simplicity, let me call this "the view from
> nowhen".
>
> [It took me a while to realize the importance of a precise statement of
> statistical principles. Then I had to think about what mine were. My
> stat-mech courses did not cover Jaynesianism, Bayesianism, and
> frequentism.]

I'm glad you're willing to take a bit of a detour into the interesting
question of what probabilities actually mean! Unfortunately,
frequentism is so riddled with problems that most statisticians who have
thought about this much are Bayesians. There was a Bayesian/frequentist
war at some point in this century, and the Bayesians essentially won, as
far as I can tell. That's why I didn't approach the problem from a
frequentist viewpoint. Again I recommend that you take a peek at

http://math.ucr.edu/home/baez/bayes.html

which begins to explain Bayesianism and its advantages. Basically
Bayesianism prefers to *admit* what it argues is the *fact* that
probability is subjective. I include a bit of the above file at the end
of this post.

> Viewed from nowhen, it isn't immediately obvious at which time the
> micro-state is "randomly chosen". [I prefer "randomly distributed", but I
> am evidently a frequentist.] We need SOME assumption regarding the
> distribution of micro-states. In the salt-and-pepper discussion, the final
> state was declared to be not randomly chosen, because it was so close
> (time-evolution-wise) to a state of lower entropy. So the most obvious
> candidate for the time at which the micro-state is randomly chosen is the
> initial state.
>
> In other words, the statistical principle (in the view from nowhen) that
> leads to your line of reasoning is:
>
> In an experiment over some time interval, the end-point micro-state
> of lower entropy is (or at least may be viewed as) randomly chosen.
>
> This may or may not seem strange to you. I think it is a fair abstraction
> of what you and others said during the salt-and-pepper discussion. Again,
> we have to make SOME assumption about micro-state distributions; the
> simplest assumption is a random distribution at one of the end-points; and
> from your point of view, the higher-entropy state is precluded by its
> unnatural proximity to a lower-entropy state.
>
> I hope you accept the above description as a (frequentist) formulation of
> how you reasoned about the salt-and-pepper experiment. If so, I'm done for
> now. [I don't really like the above principle. Indeed, my intuition says
> that there is still something amiss. But rationally, so far as I can see,
> my discomfort is just a matter of taste.]
>
> How's that sound?

I agree that the principle you enunciate comes very close in practice to
getting the results I get. But I prefer to use a different principle. I
don't want to pick nits here. But I like a pure Jaynesian formulation
better, which is: describe the state of a system using the mixed state
with the highest entropy consistent with what you know about it.

Let me reconcile this with your proposal.

Suppose, as you propose, I were "macroscopically omniscient" and knew
the values of a bunch of macroscopic observables over the whole history
of a system (say the universe, to be grandiose). Suppose I followed the
Jaynesian prescription. What mixed state would I get? Well, if the
history had one "low-entropy end" and one "high-entropy end", the
macro-observations at the low-entropy end would constrain the
micro-state so much more than those at the high-entropy end that the
result would be very similar to what would happen if I followed your
prescription and simply *discarded* my knowledge of the
macro-observations at the high-entropy end of the universe, and chose a
mixed state of maximal entropy subject to the constraints imposed by
my macro-observations at the low-entropy end.

Note: I think "choosing a mixed state of maximal entropy subject to the
constraints imposed by my macro-observations at the low-entropy end" is
just my pedantic way of saying "the end-point micro-state of lower
entropy is (or at least may be viewed as) randomly chosen." I prefer to speak
of a mixed state rather than a randomly chosen micro-state, but I regard
them as different ways of talking about the same idea.

The Jaynesian viewpoint is that it's wasteful to arbitrarily discard
information gained by macro-observations near the high-entropy end.
However, because it is the high-entropy end, the amount of information
gained at that end is so pitifully small that discarding it as you
propose often has little effect in practice.

Let me consider an example, just for my own entertainment.

For example, say we have 1000 stones in a line, 500 white and 500 black.
We treat all white stones as indistinguishable and all black stones as
indistinguishable. There are thus 1000!/(500! 500!) microstates. Say
we start with all white stones at the left and all black ones at the
right, and then repeatedly do some deterministic process that tends to
mix them. Say we do "macro-observations" as follows: we divide the line
up into 10 segments, each with 100 stones in it, and measure the total
number of white stones in each segment. At the beginning of the game
our macro-observation is thus

100 100 100 100 100 0 0 0 0 0

Let's say at the end we perform another macro-observation and get

50 50 50 50 50 50 50 50 50 50

Our initial macro-observation completely determines the micro-state!
This observation thus gives us

log_2 (1000!/(500! 500!)) = 994.69

bits of information.

Our final macro-observation is consistent with many micro-states.
In each segment there are 50 white stones and 50 black ones, so there
are 100!/(50! 50!) possibilities. There are thus (100!/(50! 50!))^10
total possibilities. This observation thus gives us a measly

log_2 ((1000!/(500! 500!)) / (100!/(50! 50!))^10) = 31.2

bits of information. So it contributes little knowledge of the
micro-state compared to the initial observation.
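
These bit counts are easy to check exactly (a sketch of my own, using
Python's exact integer arithmetic):

```python
# Check the information content of the two macro-observations in the
# stones example: log2 of the number of micro-states each one rules in.
from math import comb, log2

total_states = comb(1000, 500)     # all arrangements of 500 white, 500 black
initial_bits = log2(total_states)  # initial observation pins down 1 micro-state,
                                   # so it yields log2(total_states) bits

# Final observation: 50 white stones in each of the 10 segments of 100.
final_states = comb(100, 50) ** 10
final_bits = log2(total_states) - log2(final_states)

print(round(initial_bits, 2))  # ~994.69
print(round(final_bits, 1))    # ~31.2
```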

But how much do we actually lose by ignoring this final
macro-observation? In fact, in this extreme example, the initial
macro-observation COMPLETELY DETERMINED the micro-state, and thus the
final observation contributes no *new* information. So we don't lose
ANYTHING by ignoring it. Of course this is extreme, but it serves to
illustrate my point. Typically macro-observations on the high-entropy
end contribute less information. Disregarding them changes the answers
to many (but not all) problems rather little.

Since I am pushing Bayesianism and Jaynesianism, I should note that I am
computing these things using the prior assumption that all permutations
of the stones are equally likely. No computation of probability without
a prior! That's the irreducible subjectivity of probability proclaimed
by the Bayesians.

Okay, here's that stuff from my web page I mentioned. For the rest, see
the web page.

From: you...@ibm7.scri.fsu.edu (Saul Youssef)
Newsgroups: sci.physics
Subject: Re: Many-Worlds FAQ
Date: 11 Nov 1994 06:43:01 GMT
Organization: Supercomputer Computations Research Institute
References: <1994Nov8.1...@oracorp.com> <39ommc$l...@galaxy.ucr.edu>

|John Baez writes:
|
|This is a really crucial issue, and it's probably due to long
|discussions with Daryl that I evolved my current position on this issue.
|I do *not* side with the - no doubt mythological - "Everettistas"
|described below:
|
|>Here is a sample conversation between two Everettistas, who have fallen
|>from a plane and are hurtling towards the ground without parachutes:
|
|> Mike: What do you think our chances of survival are?
|
|> Ron: Don't worry, they're really good. In the vast majority of
|> possible worlds, we didn't even take this plane trip.
|
|Part of the point of Bayesianism is that you start with a "prior"
|probability measure. Let me just call this the "prior" --- I think
|Bayesians have some bit of jargon like this. (I wish some expert on
|Bayesianism would step in here and give a 3-paragraph description of its
|tenets, since I feel unqualified.)
|
:-). I have found this point of view to be very helpful for getting
a better understanding of quantum mechanics and even for understanding
why people argue about it so much. In fact, the spectrum of interpretations
in quantum mechanics has a close analogue in probability theory.
The "wave function is real" view is analogous to the "frequentist" view of
probability theory where probabilities describe "random phenomena" like
rolling dice or radioactive decays and the "wave function represents what
you know about the system" view is analogous to the Bayesian view where
probability is just a consistent way of assigning likelihoods to propositions
independent of whether they have anything to do with a "random process." Just
as in quantum mechanics, arguments have raged for many (more than 100) years
without any real resolution and, just as in quantum mechanics, when the
two camps actually solve the same problem, the mathematics is basically
the same. A typical example of this sort of disagreement is Laplace's
successful calculation of the probability that Jupiter's mass is within
some interval. To a frequentist, the mass of Jupiter is a number.
Admittedly, this number is unknown, but it is definitely not a random
variable (since there is no "random process changing Jupiter's mass")
and so it is utter nonsense to talk about its p.d.f. This may seem
like a silly kind of disagreement, but the consequences in terms of
what problems can be solved and in terms of understanding what probability
theory is all about couldn't be greater.

Most people are more familiar with the frequentist view where you
say that if you perform a "random experiment" N times with n successes
then the "probability of success" is the large N limit of n/N. You then
assume that these probabilities obey Kolmogorov's axioms, and you're all
set. The rest of probability theory is solving harder and harder problems.
The Bayesian view of things is a bit different and starts this way. Suppose
that we want to attach a non-negative real number to pairs of propositions
(a,b) and this number is supposed to somehow reflect how likely it is that
"b" is true if "a" is known. Let me write this number as a->b [*].
For "->" to be a useful likelihood measure one expects a few modest things
of it. For example, if you know a->b, this should determine a->.not.b and
the procedure to get from a->b to a->.not.b shouldn't depend on "a" or "b."
It turns out that this and just a little bit more is enough to entirely fix
probability theory, as shown in an obscure paper by Cox in Am.J.Phys.
in 1946. One gets

(a -> b.and.c) = (a -> b)(a.and.b -> c)
(a -> b) + (a -> .not.b) = 1
(a -> .not.a) = 0

which is the Bayesian form of probability theory. You can then trivially
show (Bayes Theorem) that

(a.and.b -> c) = (a->c) {(a.and.c -> b)/(a->b)}

if (a->b) is nonzero. This is often used in the following context:

a = "stuff that you know"

b = "more stuff that you found out"

c = "something that you're interested in"

Then if you already know (a->c), Bayes theorem tells you how to find the
probability that c is true, given your additional knowledge b, i.e.
(a.and.b -> c). For example, suppose that you happen to know that the
behavior of a random variable x obeys one of a family of pdfs f(x,t)
where t is some unknown parameter. Given a sample of independent x values
X = (x1,x2,...,xn), what can you say about t? Using Bayes theorem,
it's easy as pie. If "e" is the initial knowledge of the experiment as
just described, then you want to calculate

(e.and.X -> t) = (e->t) {(e.and.t->X)/(e->X)}

Here (e->t) is called the "prior" probability that the true pdf is f(.,t).
If we have no reason to prefer one value of t over another, we can use
the "uniform prior" (e->t) = const. Then, since (e.and.t->X) =
(e.and.t->x1.and.x2.and.x3...xn) = f(x1,t)f(x2,t)...f(xn,t),

(e.and.X -> t) = const. Prod{j=1,n} f(xj,t)

and you're done. This is usually called the likelihood method. There are
more sophisticated methods for choosing priors in various situations
(e.g. "Maximum Entropy"), but the basic idea is the same.

So far, I have left out one important point. In the frequentist
view of probability you start off assuming that probabilities have a particular
frequency meaning. In the Bayesian view, this must be derived by
considering many copies of a single experiment and computing the probability
that a fraction n/N of them succeed. You can then get the standard frequency
meaning of probabilities provided that you assume (roughly) that
probability zero events don't happen.
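This derived frequency meaning is easy to see in a simulation (a sketch of my own, not from the post): as N grows, the observed fraction n/N concentrates around the underlying probability p, and outcomes where it stays far from p have vanishing probability.

```python
import random

def success_fraction(p, n_trials, seed=0):
    """Run n_trials independent yes/no experiments, each with success
    probability p, and return the observed fraction n/N."""
    rng = random.Random(seed)
    successes = sum(rng.random() < p for _ in range(n_trials))
    return successes / n_trials

# The fraction n/N approaches p as N grows; recovering this limit is
# how the Bayesian view re-derives the frequentist definition.
fractions = {n: success_fraction(0.3, n) for n in (100, 10_000, 1_000_000)}
```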

Note that because of this frequency meaning, probability theory is
not just a piece of mathematics. It is really a physical theory about
the world which might or might not be correct. From this point of view, it
is tempting to try to explain quantum phenomena by modifying probability
theory. As far as I can tell, this idea actually works and has more
consequences than "just another interpretation" of quantum mechanics.

|
|Bayesianism is called "subjective" in that it applies no matter how you
|get your prior. In other words, you could be a pessimist and wake up in
|the morning assuming that sometime today a nuclear attack will
|devastate your town, and constantly be surprised as each hour goes by
|without an attack. This might or might not be smart, but if you are a
|good Bayesian you can correctly compute probabilities assuming this
|prior.
|
That's right. Of course, you can get the wrong answer if you
have the wrong prior, but this is viewed as progress! From the
Bayesian point of view, science progresses by finding out that your prior
isn't working. For example, your prior may include a physical theory that is
wrong.

|
|When you compute probabilities, however, you don't just compute
|"straight" probabilities with respect to the prior, you also compute
|*conditional* probabilities.
|
In the Bayesian view, all probabilities are conditional since they
all depend on what you know. This is also true in Kolmogorov's system
but only within a fixed sample space.

|
|Note: if you want, you can think of this process of switching from
|computing probabilities using the prior to computing conditional
|probabilities as a mysterious PHYSICAL PROCESS - the "collapse of the
|wavefunction". This would be wrongheaded, because in fact it is simply
|a change on *your* part of what you want to compute! If you think of
|it as a physical process you will be very mystified about things like
|when and how it occurred!
|
Yes! As I've said, probably too many times, it's like wondering what
physical process causes a probability distribution to "collapse" when you
flip a coin.
|
|Note that anyone who acted that way would be silly, and that their error
|would have little to do with QUANTUM MECHANICS, but mainly with
|PROBABILITY THEORY. Probability theory is the special case of quantum
|mechanics in which one's algebra of observables is commutative. (This
|becomes a theorem in the context of C*-algebra theory.)
|
Could you post or email me the reference for this theorem?
|
|Now I admit that this view of quantum mechanics takes a while to get
|used to. In particular, there really *are* issues where quantum
|mechanics is funnier than classical probability theory. In classical
|probability theory there are pure states in which *all* observables have
|definite values.
|
Notice that a statement like: the coin is in "state" (1/2,1/2) would
be very bad language from the Bayesian point of view since (1/2,1/2)
represents what you know and not some physical property of the coin.
One of the reasons that this point of view "takes getting used to" in
quantum mechanics is that the language of standard quantum theory constantly
reinforces the idea that Psi is the "state of the system."
|
|Subconsciously we expect this in quantum mechanics,
|even though it's not so. So we always want to ask what's "REALLY going
|on" in quantum mechanics --- meaning that we secretly yearn for a
|wavefunction that is an eigenstate of all observables. If we had such a
|thing, and we used *it* as a prior, we wouldn't need to worry much about
|conditional probabilities and the like. But alas there is no such
|thing, as far as we can tell.
|
That would be like yearning for the "true probability distribution" for
coin flippage. But the fact that there isn't any such thing independent
of your state of knowledge doesn't mean that there isn't something REALLY
going on (e.g. a REAL copper penny being flipped by a real human being).
In spite of non-commuting observables and Bell's theorem and all its
variations, I don't think that it has quite been shown that there can't
be something "REALLY going on", as you say.

By the way, Ed Jaynes is writing a book on Bayesian probability
theory which is easily readable by undergraduates. For some reason, its
current draft is available on the web at
http://www.math.albany.edu:8008/JaynesBook.html
Jaynes is a very interesting guy and his stuff is always worth paying
attention to.

[*] (a->b) is often written P(b|a).

Greg Weeks

unread,
May 3, 1996, 3:00:00 AM5/3/96
to

John Baez (ba...@math.ucr.edu) wrote:
: [A LOT OF INTERESTING STUFF THAT WILL TAKE A WHILE TO DIGEST]

Meanwhile, my original question (repeated below) seems to be unanswered.
(The answer may lie in the referenced materials. Then again, maybe not.)

: Suppose, as you propose, I was "macroscopically omniscient" and knew
: the values of a bunch of macroscopic observables over the whole history
: of a system (say the universe, to be grandiose).

[Actually, I'd rather not be grandiose. I'd rather start with a small,
practical (?) experiment. This allows me to consider an ensemble of
systems, which I'd prefer at this point.]

My original question was: What statistical principles would you use to see
if the observed values are in agreement with a particular micro-physics
theory?

In the case of a micro-observed system, one answer was to discard your
knowledge except for at one time, propagate the micro-state at that time
using the laws of micro-physics, and compare the propagated results with
the observed results.

The analogous macro-physics answer is to discard your knowledge except
for at one time, propagate the macro-state at that time using the laws
of micro-physics AND an assumption about the probable distribution of
micro-states in the macro-state, and compare the propagated results with
the observed results.

However, this macro-physics answer does not work in general. Indeed, I
believe that it works in practice only when you propagate the selected
macro-state in the temporal direction of increasing entropy. Therefore,
we are dangerously close to PRESUMING that time has an arrow as one of
our fundamental statistical principles.

There are various ways of avoiding this unpleasant presumption:

1. I think I could avoid the presumption in the above context, although
the result may or may not be pretty.

2. You could change the above context by avoiding the macroscopically
omniscient view (the view from nowhen). Unfortunately, I don't think that
I could follow a discussion of the origin of time's arrow that avoided the
view from nowhen. I need it.

3. You could change the above context by using an alternative statistical
principle to see if the observed values are in agreement with a particular
micro-physics theory. (No alternative principle occurs to me, but that's
just me.)

I'm hoping for #3, but I can be content with #1.


Greg Weeks

Dien Alfred Rice

unread,
May 3, 1996, 3:00:00 AM5/3/96
to

In article <DqFBC...@murdoch.acc.Virginia.EDU>, we...@dtc.hp.com (Greg Weeks) writes:
> Alex Kasman (kas...@alpha.math.uga.edu) wrote:

[...]

> : So, here is what I think the second law of thermodynamics should say:
> :
> : We don't know if entropy will increase or decrease, but if you have to
> : bet, bet on an increase.
>

> This is certainly correct in practice. But I was mostly concerned with
> _why_ it is correct -- eg, why is the asymmetric belief that you mention
> above preferable to its time reverse? And that remains a mystery.

My own thinking on this topic goes something like this....

"Psychological" time -- the human perception of time -- is directly
related to the 2nd law of thermodynamics. As "machines" that
keep trying to decrease our own entropy, by increasing the entropy
of our environment (as all living things do), we need low entropy
systems as our source of energy (in our case, the relatively low
entropy of a concentrated sun, as opposed to the sun's energy being
spread out all over the place).

Our own need of low entropy systems as an energy source is somehow
related to how we perceive "time" as going from a low entropy
beginning to a high entropy end -- the 2nd law of thermodynamics.

What this comes to then is this:

1. If the universe was at a steady-state -- high entropy at the
beginning and end, we could not exist, since we would have no
energy source for our own "self entropy-reducing" activities.

2. If the universe was high entropy at the "beginning" and
low entropy at the "end" of time, then our psychological
perception of time would be backwards. We would always see
entropy increasing, since our biology is directly related
to our decreasing our own entropy. We need a low entropy
environment for us to be able to keep decreasing our own
entropy -- the environment has to be able to provide an
energy source.

3. The possibility as we see it now, with low entropy at the
"beginning" and high entropy at the "end." The psychological
effect, I think, would be the same as # 2.

So, we need to have high entropy at one end of "time," and
low entropy at the other end, for us to even exist. Because
we need an energy source, I think we will always perceive
psychological time as going from low entropy to high entropy.


If you think these are good ideas, I will need a job after my
Ph.D. Anyone want to hire me? :) This isn't my field
(I do theoretical quantum optics), so my apologies if these ideas
are not yet well developed (though someone else may have
developed them more than me without me knowing of it). I wouldn't
mind thinking more about these issues as a post-doc, though. :)

All comments appreciated.


Cheers,

Dien Rice


john baez

unread,
May 3, 1996, 3:00:00 AM5/3/96
to

In article <4m303i$e...@guitar.ucr.edu> ba...@guitar.ucr.edu (john baez) writes:
>In article <DqFru...@murdoch.acc.Virginia.EDU> sch...@howdy.wustl.edu writes:
>>It seems highly unlikely to
>>me that the process of mixing salt and pepper is influenced by the expansion
>>of the universe, but then I am no cosmologist (nor a philosopher). :-)

>To me it seems obvious. The expansion of the universe as time increases
>is why the stars radiate energy outwards as time increases....

Sorry!!! I left out something really important. The expansion of the
universe *starting from a low-entropy state* is why the stars radiate
energy outwards as time increases....

Thanks to someone for bringing my attention to this via email.

Bill Taylor

unread,
May 7, 1996, 3:00:00 AM5/7/96
to

Time to throw a couple of old posts of mine into the pot. This thread
recurs regularly, like a dose of malaria, and these were written the last
couple of times it did.

They have been marginally edited, mainly to seam into the current discussion.

Bill.
=========================================================================

People have said that the initial (low-entropy) conditions are the "real"
reason for the thermodynamic arrow of time.

In a way, I guess this is so. But "causes" and "reasons" are fairly fuzzy
things at the best of times, and this answer always strikes me as a bit
unsatisfying.

It's a bit like saying...

"Why did the ball roll down to the bottom of the hill?
- ANSWER: coz it started at the top!"
~~~~~~~~~~~~~~~~~~~~~~~~~
Most people would prefer to say something about gravity. Both answers are
correct as "causes" or "explanations", but the second is more satisfying.
(Is this the difference between "formal causes" and "efficient causes" ??)

So anyway, it seems to me that the answer - "initial low-entropy conditions"
to the question "why is the arrow of time", is a bit unsatisfying like this.

What then, could be a "mechanism-like" answer to the question; in the sense
that "gravity" would be an answer to the ball-on-the-hill question?

My naive answer is, that it comes about because of the ESSENTIAL INDETERMINISM
in the QM universe. I posted details on this a while ago, showing how such
indeterminism could do it; [re-posted in the next section]. At that time,
I got a fair old rubbishing from people who didn't actually believe there
*was* any essential indeterminism in the universe; so my ideas were rather
by-passed as the discussion veered to that track.

I was rather naive then, not just about physics (which I still am), but also
about physicists (which I've learned a bit about). I naively assumed that
almost all physicists were content with the notion of essential indeterminism
in the world; almost all basic texts and popularizations seem to assume it as
a matter of course. I suspect that the great majority of working physicists
still do accept it. However, I'm now aware that the majority of sci.physics
(or at least the vocal majority) deny this. They are firm believers in
determinism, Everett style. Zeh's book was widely quoted; and he too, (though
not openly an Everettist) has rather snide remarks to make about indeterminism.
So then - to you determinists, whether Everettist or not, (if you're still
reading):- I can produce no real response, other than the simple entropic
explanation already given, which relies on a rather observer-dependent
attitude towards accumulating microstates.

However - to non-determinists:- I say there is a more detailed explanation
available; namely that this very non-determinism produces a kind of
micro-scale "damping", as I illustrated with my example of rocks falling
into sludgy pools, [reposted in the next section].


It's often struck me that one of the chief attractions of Everettism is its
resuscitation of full determinism - something a lot of people seem desperately
keen to have around, (even though we all know in our hearts, from our daily
lives, that it just isn't true! ;-) ). Indeed, Everett honestly and blatantly
says as much himself, in "Theory of the Universal Wave Function", VI(d):-

.. There can be no fundamental objection to the idea of a stochastic theory,
.. except on the grounds of a naked prejudice for determinism.

What a giveaway!!

My own prejudices are for essential non-determinism; it helps "explain" free
will (time to resuscitate an old sig of mine! [at end]); and it also helps
explain the arrow of time, and in a way that seems most satisfying to me.

=============================================================================

>Why is the future different from the past?

I'm glad John Baez has got around again to a post concerning this; it gives
me an opportunity to make a jackass of myself, and post my naive views on the
matter, in the hope that someone will shoot them down (without being too
technical), and I may learn something.

The question is, what *is* irreversibility, and where does it come from.
We all know it exists, of course, as is clear from the fact that we can
make permanent records, remember things, things cool down, and so on.
John earlier noted well that many an explanation is...

>not especially enlightening, in fact, it's downright circular.

Hopefully, this will not apply to my naive thoughts; but if so, no doubt I'll
soon be told. But for many years I've been aware that physicists often
get their knickers in a twist over this, and I've never really been sure why.

So; irreversibility. If we run a film of almost anything backwards, it mostly
quickly becomes clear that this is so. Why? In the real world, for instance,
we might often see rocks falling off hillsides, and spludging down into
mudflats. But we never see circular impulses of mud converging in, and
just happening to hurl a rock up to lodge onto a hillside nook. Why not ??

John gave a nice account of Boltzmann's attempted explanation, noting it was..

>a rather desperate solution to the problem of temporal asymmetry

...and suggesting that it is a..

>great achievement of modern cosmology has been to offer us an alternative

Maybe so. But it seems a long way to go when there may be a quicker route.
Is it *really* believable that the early (or final) conditions of the
universe explain why rocks never hop up out of mud pools ? (Apart of course
from the fact that strict boundary conditions may be needed to ensure that
the rocks and pools are there at all!) But given rocks and pools, it's hard
to credit that early cosmological conditions have anything further to do with
them. To the naive observer like me it seems "a folly without warrant", almost.

So why do rocks not fly up ? The 2nd law of thermodynamics is often invoked,
and this is obviously connected, but it strikes me as being unsatisfactory
as an *explanation*. Rather, it is just another way of re-describing
irreversibility in a certain context. To use it as an explanation is (IMHO)
what John dubbed "downright circular".

So leaving aside the macro-statistical, and the cosmological, where do we
look to deal with the nasty fact that...

>temporal asymmetry is not explicable ... by a time-symmetric physics.

And in particular, *local* temporal asymmetry.

This is the key, that gets the knickers in a twist. Physics is said to be
"time-symmetric". The equations of physics, whether Newton's, or Maxwell's,
or Schrodinger's, or whatever, are always noted to be time-symmetric; and
this is no doubt true. But I would humbly submit that the equations of
physics are NOT ALL of physics. There IS more to physics than just the
equations, and the extra is clearly *not* time-symmetric. Now we don't need
any fancy new mystical ideas here, just standard boring old college physics
and its QM underpinnings.

Irreversibility may not be apparent in the various standard equations of
physics; but surely it's due to the quantum-mechanical origin of a form of
*damping*, or hysteresis? Irreversible damping is known to us all, but
seemingly does not arise from the time-symmetric laws mentioned above.
And yet damping is almost universal, and is clearly an irreversible process,
or rather ensures that many other processes *will* be irreversible.

So one is forced to conclude that the equations mentioned above are not
"all" of physics, in some sense. And surely isn't this already well-known ?
"Damping" of a sort is already present in the microscopic events of QM.

The standard interpretation of QM provides for this fact, surely ? That is,
the interpretation of QM based on essential indeterminism. It is this
essential indeterminism that lies at the heart of time asymmetry, IMHO.

Rocks don't pop up, because if we stopped the universe(!!) shortly after one
had flopped down, and reversed every particle/photon's motion, (oh well it's
only a thought experiment), we would *not* see the exact reversal. Sure, it
would start out exactly reversed, but then little errors would creep into
the scenario, as QM-damping took its inevitable accumulating toll. If we
stopped things while the rock was falling, then reversed, we might just
see it go back up on its ledge; but if we let it hit the swamp and let the
waves die down, then reversed things, no way could it get back up.

Why ? Because of this basic quantum irreversibility. It happens virtually
any time there is degradation due to heat radiation, and elsewhere too.

The rock falls; the energy is converted to sound and mud waves, these die
away into heat, which is radiated away. Heat? The thermal motion of the
molecules is reduced by knocking together:- these knocks occasionally send
an electron up to a higher orbit (reducing the thermal recoil); the electron
soon drops down again; releasing a photon. Now we reverse all this. The
photon comes in, knocks the electron back up to... NO IT DOESN'T ! This
is already a critical point. There is *no* guarantee that the photon coming
back exactly to where it was created, would be absorbed by the electron
(now with the opposite momentum). It MIGHT be, but most likely not. These
interactions have an irreducible probabilistic component, as I understand it,
which is not intrinsically part of Schrodinger's equation, but just derived
from it via the amplitude. This real amplitude of the complex wave function
in some way produces time asymmetry, even though the function itself
satisfies the time-symmetric equation.

So anyway; most of these "reversed" photons will miss their "proper" electrons;
though soon hit something else of course - the mud would re-heat a little,
but would *not* regather a converging wave to hurl out the rock.


So there is my naive view, at overly boring length. It is these micro-events
everywhere, all the time, that produce irreversibility, that produces the
damping and the time-asymmetry. Yes, the equations governing the events are
symmetric, but the quantum uncertainties involved ensure that the events
themselves will not (usually) reverse exactly. Thus we get damping, friction,
cooling, thermodynamics, permanent records, memories, and all the other
macroscopic paraphernalia of an irreversible universe.

So my puzzlement at why physicists so often get their knickers in a twist
comes down to this - why is it not clear that the equations of physics
are not all the physics "there is"? That something else, essentially
irreversible, (unlike Newton/Maxwell/Schrodinger), is also needed, and
that we already know what it is - essential quantum uncertainty.

Simply put; physics is *not* time-symmetric.

-------------------------------------------------------------------------------
Bill Taylor w...@math.canterbury.ac.nz
-------------------------------------------------------------------------------
Galaxies - results of chaotic amplification of quantum events in the big bang.
Free will- the result of chaotic amplification of quantum events in the brain.
-------------------------------------------------------------------------------

john baez

unread,
May 7, 1996, 3:00:00 AM5/7/96
to

In article <4mgh78$n...@gap.cco.caltech.edu> gott...@cco.caltech.edu (Daniel Gottesman) writes:

>I agree that this is one of the more confusing and important issues associated
>with the arrow of time. Here's my answer:

>The universe actually is starting out in a high entropy state, relative to
>its size at that time. However, as the universe expands, the entropy per
>comoving volume is constant, but the physical volume increases. This means
>the maximum *possible* entropy per comoving volume increases as well.
>The gas is then overwhelming likely to evolve towards a new
>high-entropy state. The Hubble expansion turns high entropy into low
>entropy! When people talk about a "low-entropy" beginning to the
>universe, they are figuring in the
>effects of curved space-time in a poorly-defined manner. This argument
>tends to suggest that entropy will always "decrease" towards singularities,
>which sort of answers the question of the origin of the arrow of time in
>an open universe, but leaves it as murky as ever in a closed universe.

>There is another complication, I think, related to formulating statistical
>mechanics on gravitationally bound systems, which are unstable to collapse.
>I'm not sure how important that is for this.

This sounds good to me as far as it goes. The place where things start
getting complicated and confusing to me is when I really try to take the
gravitational effects seriously. I believe there is already an
incompatibility between standard statistical mechanics and Newtonian
gravity, because the notion of Gibbs state (equilibrium state) requires
that the Hamiltonian be bounded below. As Gottesman notes, in Newtonian
gravity a system can keep reducing its energy by collapsing more, so
there is fundamentally no such thing as an equilibrium state, only
metastable states.

When you try to treat gravity using general relativity, things get even
more slippery. Here there is not even any Hamiltonian, in general,
because general relativity is not formulated as a dynamical system with
respect to a pre-established notion of time. (In asymptotically
Minkowskian universes one may define a Hamiltonian by reference to
clocks at infinity, but the big bang models we are discussing are not
asymptotically Minkowskian.) Rovelli has emphasized that the problems
of combining general relativity and statistical mechanics are very much
akin to the problems of combining general relativity and quantum
mechanics. This is what makes phenomena that involve general
relativity, quantum mechanics AND statistical mechanics so fascinating,
such as Hawking radiation. Clearly there are some big things waiting to
be understood in this direction.

We can avoid these complications by massively simplifying the problem.
Suppose that universe starts out as a bunch of hot plasma compressed
inside an enormous insulated piston. Describe gravity classically in
terms of a force governed by a potential like

V(r) = -m/r   for r > L        (L = Planck length)
     = -m/L   for r < L

This makes the energy bounded below so statistical mechanics works just
fine.
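As a tiny sketch of my own (units chosen so that G = 1 and the masses are folded into m), the cutoff potential and the lower bound it provides look like:

```python
def V(r, m, L=1.0):
    """Gravitational potential with a cutoff at the Planck length L:
    V = -m/r for r > L, held constant at -m/L for r <= L.  V is then
    bounded below by -m/L, so the Hamiltonian is bounded below and an
    equilibrium (Gibbs) state can exist."""
    return -m / r if r > L else -m / L

# No matter how far the system collapses (r -> 0), the potential
# energy cannot drop below -m/L:
assert min(V(r, m=1.0) for r in (1e-9, 0.5, 1.0, 2.0, 100.0)) == -1.0
```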

As Gottesman notes, the available phase space (consisting of the states
of energy E) will expand as God, or his henchman Lucifer, pulls out the
piston and lets the universe expand. More precisely, the universe will
do work on the piston and lose energy, but the configuration space of
each particle will grow in proportion to the volume, so the total
available phase space should grow.

Before the piston is pulled out, let's say the system starts out in
thermal equilibrium at a high temperature. It is thus in a state
minimizing the free energy E - TS, and T is large so the entropy will be
high, at least for the available phase space. If it is pulled out very
slowly (adiabatically) the universe will stay in equilibrium at all
times. But in a big-bang-like scenario, the piston is pulled out
suddenly, so the universe never quite has a chance to come into
equilibrium. As the plasma cools, first nuclei form, then hydrogen and
helium gas, and as the temperature decreases further the gas begins to
clump up due to gravity, galaxies and stars form, the stars radiate
energy into the now chilly interstellar space, planets form, plants
grow on the planets, thus exploiting the fact that the radiation emitted
by the stars is not in thermal equilibrium with interstellar space,
physicists are born, etc. At all times the universe is "struggling to
catch up" with ever-increasing phase space; its entropy is always
increasing, but always low compared to what it could be for the given
volume.
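A quick way to check the "available phase space grows with the volume" part of this story is the Sackur-Tetrode entropy of an ideal monatomic gas (my illustration, not part of the original post): at fixed N and E, doubling V raises the equilibrium entropy by exactly N ln 2.

```python
import math

def sackur_tetrode(N, V, E, m=1.0, h=1.0, k=1.0):
    """Equilibrium entropy of an ideal monatomic gas (Sackur-Tetrode),
    in units with k = h = 1.  Used here only to illustrate that the
    maximum entropy at fixed N and E grows with the volume V."""
    return N * k * (math.log((V / N) * (4 * math.pi * m * E / (3 * N * h**2))**1.5) + 2.5)

# Doubling the volume at fixed particle number and energy raises the
# equilibrium entropy by N*ln(2): the target the gas "chases" as the
# piston is pulled out keeps receding.
N, E = 1000, 1000.0
dS = sackur_tetrode(N, 2.0, E) - sackur_tetrode(N, 1.0, E)
```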

This scenario seems to capture some features of our universe, but it
doesn't do justice to the really interesting things about gravity. For
example, it appears that entropy is maximized when most of the matter
around is in the form of black holes. (This gets us into the
problematic realm of general relativity and statistical mechanics, which
I can't really do justice to, so take this with a grain of salt.) This
is why I spoke of a "low-entropy homogeneous initial state" for the
universe: even though a universe of homogeneous hot plasma has rather
high entropy compared to some possibilities, it appears to have low
entropy compared to a universe full of black holes. In fact, Penrose
has done calculations along these lines and shown that the entropy is
really VERY low compared to the universe full of black holes.

There are a lot of other complicating factors when you try to take
gravity into account, too, and I have never felt I came close to taking
them all into account.

John Baez

unread,
May 8, 1996, 3:00:00 AM5/8/96
to

So it seems Weeks is assuming the cylinder is perfectly insulated, and
the way the piston is pushed back in is precisely a time-reversed
version of how it is pulled out. If so, we have a system isolated from
the arrow of time of the universe at large, presumably satisfying
time-reversible laws, and subject to time-symmetric external influences
(the position of the piston). Yet he is claiming that it behaves in a
time-asymmetric manner? That does make a nice puzzle.

Bill Taylor

unread,
May 8, 1996, 3:00:00 AM5/8/96
to

ba...@math.ucr.edu (John Baez) had this tucked away in one of his long writes:

|> frequentism is so riddled with problems that most statisticians who have
|> thought about this much are Bayesians.

Not so much riddled with problems, as that it is not a *single* approach to
statistics, as Bayesianism is. "Frequentist" (better, "orthodox") stats is
a hodge-podge of different methodologies and approaches. Some are good here,
some there. One or two are even riddled with problems.

That's why a lot of Bayesians like their approach - its uniformity. (Here's
a case where improper apostrophization would have a serendipitous result:
That's why a lot of Bayesians like their approach - it's uniformity.) :)

Mind you, that very same reason puts off a lot of folk - a uniform approach
easily tends to religious fervor!

|> There was a Bayesian/frequentist
|> war at some point in this century, and the Bayesians essentially won,

OY! I think not. I. J. Good, one of their prophets, used to speak of
"a Baysean 21st century". He looked to the inevitable time when all the
frequentists had died off, leaving Bayesians in possession of the field.

However, even my most fervently Bayesian colleagues (and we are a hot-bed here
at Canterbury NZ) would sadly admit that they are still a minority. And I
think all but the really evangelical would admit there will never be a complete
victory, even in the 21st.

But it's very dangerous to predict; especially the future...

-------------------------------------------------------------------------------
Bill Taylor w...@math.canterbury.ac.nz
-------------------------------------------------------------------------------

Classical: "now" is the last instant of the past.
Quantum: "now" has a short but nonzero duration - namely the extent
of future which will be collapsed by the next observation.
-------------------------------------------------------------------------------



Lee Rudolph

unread,
May 10, 1996, 3:00:00 AM5/10/96
to

mat...@math.canterbury.ac.nz (Bill Taylor) writes:

>But it's very dangerous to predict; especially the future...

I'm fond of Peierls's expanded variant of that quip:

We may try to analyze the problem [of irreversibility]
somewhat more deeply by asking why it is that we can easily
perform experiments in which initial conditions have to be
specified, but never any requiring terminal conditions.
This is the real distinction between past and future.
A little thought shows that this is connected with the
fact that we can remember the past, and that we can make
plans for the future, but not vice versa.

Lee Rudolph


me...@cars3.uchicago.edu

unread,
May 10, 1996, 3:00:00 AM5/10/96
to

In article <4moonl$a...@agate.berkeley.edu>, msk...@ix.netcom.com (Stephen Paul King) writes:

>I am working toward a Master's degree in philosophy with an interest in
>quantum gravity and have found Mackey's work intriguing. Also, I have
>been wondering if Goedel's Incompleteness Theorem, if interpreted in
>physical terms, would raise questions about the assumptions of the Big
>Bang theory: To wit, if the universe is a finite closed system having a
>total mass-energy content of zero, would this not be equivalent to a
>system of logic that is completely provable to be self-consistent from
>within. Goedel's work says: No. Is this relevant? Or am I pushing an
>idea too far?
>
My gut feeling is that you are pushing it too far, but also that it may be
worthwhile to keep pushing. Just like the moderator I can't see how
the Universe is equivalent to a system of logic but, if any sort of
equivalence may be found, it'll certainly be quite interesting.

Mati Meron | "When you argue with a fool,
me...@cars.uchicago.edu | chances are he is doing just the same"

[Mod. Note: We may be pushing it too far from actual physics. Unless
some physical connection between such a formal system and
physics can actually be drawn, let me suggest no further
follow-ups. crb]

john baez

unread,
May 12, 1996, 3:00:00 AM5/12/96
to

In article <4n2sp6$d...@news1.t1.usa.pipeline.com> egr...@nyc.pipeline.com(Edward Green) writes:

>Goedel's result, in my understanding, says that any system of axioms must
>either be inconsistent or incomplete (please correct my understanding if
>necessary). "Inconsistent" of course means "capable of proving
>contradictory theorems" and "incomplete" means "there exist true
>statements that cannot be proven".

Your definition of incomplete is bad because it relies on the somewhat
murky notion of "truth". Say I write down some long complicated list of
axioms you've never seen before. What does it mean for a statement to
be "true", if it's not provable from the axioms? The usual definition
of incompleteness is that there are statements such that neither they
nor their negation can be proven. This is nicely dual to the definition
of inconsistency, which is that there are statements such that BOTH they
and their negation can be proven.

Anyway, there are lots of consistent and complete axiom systems.
Goedel's theorem says that any "sufficiently powerful" finite axiom
system expressible using first-order logic cannot be both complete and
consistent. There are different ways to make that "sufficiently
powerful" condition precise. For example, there is a nice axiom system
called Q which deals with the natural numbers. Its language contains
symbols for 0, +, x, and ', meaning "successor". (The successor n' of n
is n+1.) The axioms in English form are:

if n' = m' then n = m

for no n is 0 = n'

if n is not equal to 0 then n = m' for some m

n + 0 = n for all n

n + m' = (n + m)' for all n, m

n x 0 = 0 for all n

n x m' = n x m + n for all n, m

Here I am leaving out the axioms of first-order logic (familiar stuff
about "and", "or", "not", "if ... then", "for all", and "for some".)

This is an incredibly wimpy axiom system. For example, we can't even
prove n + m = m + n in this axiom system! Nonetheless Goedel's theorem
applies to it --- it is "sufficiently strong". Any set of axioms that
lets you deduce the above 7 axioms is stronger than Q, and Goedel's
theorem will apply.

Nonetheless, there are actually interesting axiom systems to which
Goedel's theorem does not apply, so one should not leave out the
"sufficiently strong" condition.

Joshua W. Burton

unread,
May 13, 1996, 3:00:00 AM5/13/96
to

> john baez (ba...@guitar.ucr.edu) wrote:

> Here's a puzzle that I think I know the answer to, but which continues
> to bug me. I assumed above that the universe started out in a low
> entropy homogeneous state. Lots of people talk this way; it isn't just
> me. But isn't it blatantly self-contradictory? Isn't a homogeneous
> distribution of gas HIGH in entropy? That's what they always say in
> statistical mechanics class? Wouldn't a low-entropy state be quite
> different?
>
> This puzzle reaches a crescendo when you read people speak of the gas in
> the early moments of the universe being in thermal equilibrium. Isn't
> that obviously HIGH in entropy, not low? So how come the people also go
> around saying the universe started out LOW in entropy?

High, but not high enough. The entropy-gain from the formation of
a few black holes with event horizons is so enormous that it completely
dominates the entropy from a uniform photon gas in thermal equilibrium.
The whole entropy machine runs because the big bang was deficient in
high-entropy clusters of self-gravitating matter. In the absence of
universally attractive forces like gravity, I agree that a uniform
gas is the highest entropy state available.
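Burton's comparison can be made quantitative with the Bekenstein-Hawking formula (added here for orientation), which ties a black hole's entropy to its horizon area A:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4 G \hbar}
```

Since the area grows with the square of the mass, the numbers are lopsided: a solar-mass black hole carries roughly 10^77 k_B of entropy, against roughly 10^58 k_B of ordinary thermal entropy for the Sun, so gravitational clumping is an enormous entropy gain.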

I saw a man pursuing the horizon;                 +--------------------+
Round and round they sped.                        | Joshua W. Burton   |
I was disturbed at this; I accosted the man.      | (847)677-3902      |
`It is futile,' I said, `you can never---'        | jbu...@nwu.edu     |
`You lie,' he cried, and ran on. -- Stephen Crane +--------------------+


Paul Budnik

unread,
May 14, 1996, 3:00:00 AM5/14/96
to

john baez (ba...@guitar.ucr.edu) wrote:

: In article <4n2sp6$d...@news1.t1.usa.pipeline.com> egr...@nyc.pipeline.com(Edward Green) writes:

: >Goedel's result, in my understanding, says that any system of axioms must
: >either be inconsistent or incomplete (please correct my understanding if
: >necessary). "Inconsistent" of course means "capable of proving
: >contradictory theorems" and "incomplete" means "there exist true
: >statements that cannot be proven".

: Your definition of incomplete is bad because it relies on the somewhat
: murky notion of "truth".

Nonsense. Truth in the context of Godel's proof is not the least bit
murky. The true statements not provable by Godel's theorem are statements
about the consistency of a formal system and equivalent to the halting
problem for some Turing Machine.

: Say I write down some long complicated list of
: axioms you've never seen before. What does it mean for a statement to
: be "true", if it's not provable from the axioms?

In the context of Godel's proof it means a TM does or does not halt.
The truth of the statement is a logical consequence of a potentially
infinite sequence of finite steps. If the system is inconsistent it
is decidable in a finite number of steps. If it is consistent then
there is no objective way to determine this in a finite number of steps
but the statement is absolutely determined.

[...]

: Anyway, there are lots of consistent and complete axiom systems.

But none that are powerful enough to embed a universal TM. Any general
purpose computer is a universal TM. Any system powerful enough to model
a computer is subject to Godel's proof. A nontrivial axiom system that
allows one to define arbitrarily large structures is subject to Godel's
proof.

[...]

: Nonetheless, there are actually interesting axiom systems to which
: Goedel's theorem does not apply, so one should not leave out the
: "sufficiently strong" condition.

It is an essential condition that should not be overlooked. On the
other hand most interesting axioms systems meet the condition.
--
Paul Budnik
pa...@mtnmath.com, http://www.mtnmath.com

daan Strebe

unread,
May 15, 1996, 3:00:00 AM5/15/96
to

In article <4nbnq8$j...@mtnmath.com>, pa...@mtnmath.com (Paul Budnik) wrote:

|>Any general purpose computer is a universal TM [Turing machine].

Well, in the limit, anyway.

daan

Keith Ramsay

unread,
May 16, 1996, 3:00:00 AM5/16/96
to

Edward Green writes:
|"Inconsistent" of course means "capable of proving
|contradictory theorems" and "incomplete" means "there exist true
|statements that cannot be proven".

john baez:
|Your definition of incomplete is bad because it relies on the somewhat
|murky notion of "truth".

Paul Budnik wrote:
|Nonsense. [...]

Typical Budnik-style response.

| Truth in the context of Godel's proof is not the least bit
|murky. The true statements not provable by Godel's theorem are statements
|about the consistency of a formal system and equivalent to the halting
|problem for some Turing Machine.

Sure. If you want to define "complete, insofar as halting questions
are concerned" you are on safe ground (although it requires that the language
of the formal system in question include sentences which are interpreted
as claims about the halting of Turing machines). It is not strong enough
a condition to imply "completeness", however.

People avoid defining unmodified "completeness" for general formal
systems as Ed did for the reason John gave. One has to have a notion
of truth for all the sentences of the language of the formal system.
One either defines a particular notion of "validity" and defines
a corresponding sense of completeness, or one defines "completeness"
as meaning that for every P, either P is a theorem or not-P is.

Keith Ramsay

David K. Davis

unread,
May 16, 1996, 3:00:00 AM5/16/96
to

Paul Budnik (pa...@mtnmath.com) wrote:

: john baez (ba...@guitar.ucr.edu) wrote:
: : In article <4n2sp6$d...@news1.t1.usa.pipeline.com> egr...@nyc.pipeline.com(Edward Green) writes:

: : >Goedel's result, in my understanding, says that any system of axioms must
: : >either be inconsistent or incomplete (please correct my understanding if
: : >necessary). "Inconsistent" of course means "capable of proving
: : >contradictory theorems" and "incomplete" means "there exist true
: : >statements that cannot be proven".

: : Your definition of incomplete is bad because it relies on the somewhat
: : murky notion of "truth".

: Nonsense. Truth in the context of Godel's proof is not the least bit
: murky. The true statements not provable by Godel's theorem are statements
: about the consistency of a formal system and equivalent to the halting
: problem for some Turing Machine.

: ...

Yes. I think also that Godel's original work involved an arithmetic
assertion that was true (if arithmetic is consistent) but unprovable -
that assertion was the translation into Godel numbers, into arithmetic,
of its own unprovability. The key to all these proofs is finding an
interpretation of the formal system that lets it speak about its own
provability. Then one constructs a predicate 'unprovable' (within this formal
system), and then shows there's an x such that 'unprovable(x)' is asserting
x is unprovable - but x is just 'unprovable(x)'. So this assertion has to be
true otherwise the system would be inconsistent. In the case of arithmetic
these true statements make definite assertions about our old friends the
numbers. I remember seeing someplace a compilation of arithmetic statements
that are unprovable (from the usual axioms) (and therefore true) - but
can't remember where.

However, I'm a little confused myself. If we take one of the unprovable
(and true) expressions, we can adjoin it to the axiom system. What do
we get then? A non-standard arithmetic? What then becomes of truth?
In set theory, we can adjoin AC (or CH), or something that contradicts
one or the other of these, getting other set theories. Our set theory
intuition is maybe a little mushy at the outer limits. But arithmetic?
Again, what becomes of truth? Like I say, I'm a little confused also.

-Dave D.

David K. Davis

unread,
May 17, 1996, 3:00:00 AM5/17/96
to

David K. Davis (dav...@spcunb.spc.edu) wrote:

Actually I meant to say, "adjoin its negation to the axiom system..."

-Dave D.

john baez

unread,
May 17, 1996, 3:00:00 AM5/17/96
to

In the limit where it's a bicycle.


Mike Oliver

unread,
May 17, 1996, 3:00:00 AM5/17/96
to

In article <DrIu2...@spcuna.spc.edu> dav...@spcunb.spc.edu (David K. Davis) writes:

>However, I'm a little confused myself. If we take one of the unprovable
>(and true) expressions, we can adjoin it to the axiom system. What do
>we get then?

The short answer: We get a consistent set of axioms that happen to
prove some false statements.

>A non-standard arithmetic?

If you like. That is, we can describe models in which these axioms
are true. But it's clear that these models are *not* the intended model
for arithmetic.

>What then becomes of truth?

Depends on what it was in the first place. I am a fictionalist; for me
truth refers to what goes well with the intuitive picture that motivates
the theory. In the case of the Goedel sentence for PA, it's clear that
it's true even though unprovable.

>In set theory, we can adjoin AC (or CH), or something that contradicts
>one or te other of these getting other set theories.

We *can*, in the sense of not getting contradictions that weren't there
before. That's not the same thing as saying it's the right thing to do.
There is a fairly clear intuitive picture motivating set theory (the
iterative hierarchy of sets); it is not clear whether or not this
picture is sufficient to decide CH.


Keith Ramsay

unread,
May 18, 1996, 3:00:00 AM5/18/96
to

In article <DrJ2p...@spcuna.spc.edu>, dav...@spcunb.spc.edu (David K.
Davis) wrote:
|: However, I'm a little confused myself. If we take one of the unprovable
|: (and true) expressions, we can adjoin it to the axiom system. What do
|: we get then? A non-standard arithmetic? What then becomes of truth?
[...]

|Actually I meant to say, "adjoin its negation to the axiom system..."

Yes, you get a non-standard "arithmetic". It has a model, but in any
model, the objects formally corresponding to the natural numbers are
more than just {0,1,2,3,...}. For example, if we adjoin the negation
of Godel's sentence to some system of arithmetic, we get a consistent
theory which asserts the existence of an element "x" having certain
properties. But 0 doesn't satisfy them, 1 doesn't, 2 doesn't, and so on.
Thus "x" becomes an auxiliary entity which, due to the prior axioms,
behaves in many respects like an integer, but isn't an integer.

A formal system can be thought of as a machine which cranks out
sentences. There's no guarantee that the sentences are true!
Consistency falls far short of being enough to ensure it.

Keith Ramsay

David K. Davis

unread,
May 19, 1996, 3:00:00 AM5/19/96
to

Keith Ramsay (Rams...@hermes.bc.edu) wrote:
: In article <DrJ2p...@spcuna.spc.edu>, dav...@spcunb.spc.edu (David K.

Now I'm both more enlightened and more confused. Is there then some
canonical model which corresponds to our intuition? Is it the case that
these non-standard models always involve some integer-like auxiliary? If
so, is there some way to take the intersection of all these models to
get a canonical one? I have no idea how you'll answer me, but if you
answer yes to any extent, then I would ask if there's some canonical
model for set theory, and hope for the same answer. And if I get that
same answer, then I ask if for those kind of theories (incomplete or
whatever you would call them) is there always a canonical or simplest
model? (I know Godel used a model of set theory in showing CH consistent
that in some sense was the simplest one compelled by the [other] axioms).

Thanks in advance.

-Dave D.

Paul Budnik

unread,
May 19, 1996, 3:00:00 AM5/19/96
to

Keith Ramsay (Rams...@hermes.bc.edu) wrote:

: Edward Green writes:
: |"Inconsistent" of course means "capable of proving


: |contradictory theorems" and "incomplete" means "there exist true
: |statements that cannot be proven".

: john baez:
: |Your definition of incomplete is bad because it relies on the somewhat
: |murky notion of "truth".

: Paul Budnik wrote:
: |Nonsense. [...]

: Typical Budnik-style response.

To call something nonsense that is nonsense? Yes that is typical.

: | Truth in the context of Godel's proof is not the least bit
: |murky. The true statements not provable by Godel's theorem are statements
: |about the consistency of a formal system and equivalent to the halting
: |problem for some Turing Machine.

: Sure. If you want to define "complete, insofar as halting questions


: are concerned" you are on safe ground (although it requires that the language
: of the formal system in question include sentences which are interpreted
: as claims about the halting of Turing machines). It is not strong enough
: a condition to imply "completeness", however.

To repeat myself that is nonsense.

The true statements that Godel's theorem refers to are equivalent
to the halting problem. You do not need a general notion of mathematical
truth. There is no murkiness to the notion of truth needed for Godel's
proof or to fully define *incompleteness* as Godel defined it.

Keep in mind that there is a mechanical procedure for translating
the axioms for any formal system into a computer program for
enumerating its theorems. We can easily modify such a program
to halt if the program enumerates a contradiction.
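That mechanical procedure can be sketched concretely. The Python toy below (a hypothetical miniature added for this archive, not any standard proof calculus: sentences are strings, "~" marks negation, and inference rules are simple premise/conclusion pairs) enumerates theorems by closure and halts exactly when it derives some sentence together with its negation:

```python
# A "formal system" as a machine that cranks out theorems, halting
# only if it ever derives a contradiction (some t together with ~t).

def enumerate_until_contradiction(axioms, rules, max_steps=1000):
    theorems = set(axioms)
    for _ in range(max_steps):
        # Halt if both t and ~t have been derived.
        for t in theorems:
            if "~" + t in theorems:
                return t
        # One round of the single inference rule: premise -> conclusion.
        new = {concl for prem, concl in rules if prem in theorems} - theorems
        if not new:
            return None  # closure reached; no contradiction found
        theorems |= new
    return None

# A consistent toy system: the search closes off without halting.
assert enumerate_until_contradiction({"A"}, [("A", "B")]) is None

# An inconsistent one: B yields ~A, so the enumeration halts on "A".
assert enumerate_until_contradiction({"A"}, [("A", "B"), ("B", "~A")]) == "A"
```

For a real formal system the enumeration runs over all derivations in first-order logic, but the shape is the same: consistency is equivalent to the non-halting of such a search.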

: People avoid defining unmodified "completeness" for general formal

: systems as Ed did for the reason John gave.

What *incompleteness* means for *any formal system* in the context of
Godel's proof is not the least bit murky. Check your quote again and
you will notice that Green was defining `incompleteness' and not
completeness. In the context of this discussion that is an extremely
important distinction.

: One has to have a notion
: of truth for all the sentences of the language of the formal system.
: One either defines a particular notion of "validity" and defines
: a corresponding sense of completeness, or one defines "completeness"
: as meaning that for every P, either P is a theorem or not-P is.

The question of truth for mathematical statements that go much
beyond the halting problem is a profound one in the philosophy of
mathematics. I discuss this at length in my web site. It is part
of the genius of Godel that his proof and his definition of
incompleteness is independent of such questions.

Paul Budnik

unread,
May 19, 1996, 3:00:00 AM5/19/96
to

David K. Davis (dav...@spcunb.spc.edu) wrote:
[...]
: In the case of arithmetic

: these true statements make definite assertions about our old friends the
: numbers. I remember seeing someplace a compilation of arithmetic statements
: that are unprovable (from the usual axioms) (and therefore true) - but
: can't remember where.

That a statement is unprovable from the usual axioms does not imply that
it is true. That a statement is not provable says nothing at all about
its truth.

: However, I'm a little confused myself. If we take one of the unprovable
: (and true) expressions, we can adjoin it to the axiom system. What do
: we get then? A non-standard arithmetic?

We get a slightly stronger system. If the statement we adjoin is the
consistency of the original system then we can obviously prove that
consistency in this stronger system. However we cannot prove the consistency
of the new system within itself. One can iterate the consistency
of a formal system in strong ways to get much stronger systems. The
key to doing this is the ordinal numbers. In a sense these define
the strength of a formal system with respect to what is provable in
the system.

We can also adjoin undecidable false statements about the integers.
When we do that we get a consistent non-standard arithmetic.

: What then becomes of truth?

The truth of halting problems is well defined. There is a hierarchy
of mathematical truth that goes as follows:

Halting problem.

Does a computer have an infinite number of outputs.

Does a computer have an infinite number of outputs an infinite
subset of which are the Godel numbers of TM's which themselves have
an infinite number of outputs.

Iterating the above notion a finite number of times. (The arithmetical
hierarchy.)

Iterating the above statement up to any recursive ordinal.
(The hyperarithmetical hierarchy.)

Statements which require quantifying over the reals.

: In set theory, we can adjoin AC (or CH), or something that contradicts
: one or te other of these getting other set theories. Out set theory
: intuition is maybe a little mushy at the outer limits. But arithmetic?

: Again, what becomes of truth? Like I say, I'm a little confused also.

So are mathematicians. This is a profoundly important issue in mathematics.
My personal opinion is that mathematical truth is a creative concept
that needs to be built up from the halting problem as I just described but
that has no definable limit or end. I think some standard questions in
mathematics such as the continuum hypothesis are neither true nor false
in any absolute sense but are only true, false or undecidable relative
to some formal system. Such questions cannot be built up from the
halting problem the way most statements in mathematics can be.

Ilias Kastanas

unread,
May 22, 1996, 3:00:00 AM5/22/96
to

In article <Ramsay-MT-180...@delphi.bc.edu>,
Keith Ramsay <Rams...@hermes.bc.edu> wrote:

Let me add a couple of points... and also try to change
the "spin" on some of Keith's statements!

>In article <DrJ2p...@spcuna.spc.edu>, dav...@spcunb.spc.edu (David K.
>Davis) wrote:

>|: However, I'm a little confused myself. If we take one of the unprovable
>|: (and true) expressions, we can adjoin it to the axiom system. What do
>|: we get then? A non-standard arithmetic? What then becomes of truth?
>[...]
>|Actually I meant to say, "adjoin its negation to the axiom system..."
>
>Yes, you get a non-standard "arithmetic". It has a model, but in any
>model, the objects formally corresponding to the natural numbers are
>more than just {0,1,2,3,...}. For example, if we adjoin the negation
>of Godel's sentence to some system of arithmetic, we get a consistent
>theory which asserts the existence of an element "x" having certain
>properties. But 0 doesn't satisfy them, 1 doesn't, 2 doesn't, and so on.


We can make this specific. Goedel's sentence is actually equivalent
(in PRA, say) to Consis(PA), i.e. An R(n), where R(n) says "n does not code
a proof from PA of a contradiction".

PA actually proves R(0), R(1), R(2)... but it cannot provide a proof
of An R(n). So En ~R(n) is consistent... and it plus PA have a model,
with universe N' = {0, 1, 2... ...x-1, x,... ...y...}. There must be an
x s.t. ~R(x)... it can't be 0, 1, 2... so x is non-standard. (If it were
standard, the proof encoded would really be a proof of a contradiction
in PA).


>Thus "x" becomes an auxiliary entity which, due to the prior axioms,
>behaves in many respects like an integer, but isn't an integer.


x behaves like an integer in _every_ first-order PA respect... N'
satisfies ALL axioms and theorems of PA, with all its elements. PA does
not capture the full truth about N, and N' is not isomorphic to N. So it
seems unfair to say 'x is not an integer'... The fact is, x is not a
_standard_ integer.


>A formal system can be thought of as a machine which cranks out
>sentences. There's no guarantee that the sentences are true!
>Consistency falls far short of being enough to ensure it.


But "true" as used here means we have an intended model in mind,
say N for PA, and we pass judgment on ~Consis(PA) because it is false
in N.

Formal systems, though, are not "responsible" for such! The
consequences of formal system T _are_ true... in any model that satisfies
T! If we want to focus on N, that is our prerogative; but it is not
a basis for badmouthing formal systems, consistent ones, that N does
not satisfy.

Ilias

Keith Ramsay

unread,
May 24, 1996, 3:00:00 AM5/24/96
to

Edward Green writes:
|"Inconsistent" of course means "capable of proving
|contradictory theorems" and "incomplete" means "there exist true
|statements that cannot be proven".

john baez:
|Your definition of incomplete is bad because it relies on the somewhat
|murky notion of "truth".

Paul Budnik:
|Nonsense. Truth in the context of Godel's proof is not the least bit
|murky. The true statements not provable by Godel's theorem are statements
|about the consistency of a formal system and equivalent to the halting
|problem for some Turing Machine.

This doesn't make Ed's informal definition a good definition of
"incomplete".

I wrote:
|Sure. If you want to define "complete, insofar as halting questions
|are concerned" you are on safe ground (although it requires that the language
|of the formal system in question include sentences which are interpreted
|as claims about the halting of Turing machines). It is not strong enough
|a condition to imply "completeness", however.

Paul Budnik:
|To repeat myself that is nonsense.

I believe you misread what I wrote, because you haven't given any
evidence of its being incorrect.

|The true statements that Godel's theorem refers to are equivalent
|to the halting problem. You do not need a general notion of mathematical
|truth. There is no murkiness to the notion of truth needed for Godel's
|proof or to fully define *incompleteness* as Godel defined it.

Godel did not define "incompleteness" this way, and so far as I know
nobody else does either. Godel wrote [this is a translation, naturally]:

If to the Peano axioms we add the logic of Principia Mathematica
(with the natural numbers as the individuals) together with the
axiom of choice (for all types), we obtain a formal system S, for
which the following theorems hold:
1. The system S is not complete; that is, it contains propositions
A (and we can in fact exhibit such propositions) for which neither
A nor A-bar is provable and, in particular, it contains (even for
decidable properties F of natural numbers) undecidable problems of
the simple structure (Ex)F(x), where x ranges over the natural numbers.

That Godel proved something stronger than just incompleteness doesn't
make redefining "incomplete" a good idea. Note that he's not offering
a second definition of "incomplete" to replace his first one; he is
merely pointing out that the undecidable proposition is of a specific
kind. In Hodges' Model Theory he defines completeness as "for any
statement sigma, either sigma is provable or ~sigma is provable". Other
references do the same. They specifically avoid using a notion of
"truth" since to do so requires limiting oneself to a given domain.

[...]
|: People avoid defining unmodified "completeness" for general formal
|: systems as Ed did for the reason John gave.
|
|What *incompleteness* means for *any formal system* in the context of
|Godel's proof is not the least bit murky.

Correct. It means "there is an A such that neither A nor ~A is a
theorem". It's a standard definition. I see no way to interpret him
as giving an alternative definition of "incomplete". He did not
define it as Ed Green did. The definition of "incomplete" doesn't
refer to restricted types of sentences, even though the proof does
exhibit a specific type of undecidable sentence.

|Check your quote again and
|you will notice the Green was defining `incompleteness' and not
|completeness. In the context of this discussion that is an extremely
|important distinction.

This is pretty funny. I'll have to try to remember it. If you're going
to make a definition, it is generally bad if the negation of the definition
doesn't work.

Paul Budnik wrote:
|Nonsense. [...]

I wrote:
|: Typical Budnik-style response.

Paul Budnik:
|To call something nonsense that is nonsense? Yes that is typical.

You regard yourself as having recognized lots of things as "nonsense",
but most of even the false things I've seen you declare nonsensical were
not so much nonsensical as understandable errors. Even declaring actual
nonsense to be nonsense is not *usually* very productive, I find. In
my opinion, you are *way* too ready to conclude with a great sense of
certainty that other people are thinking thoroughly nonsensically. I
think you would be in general a happier person if you got away from
that tendency.

Keith Ramsay

Paul Budnik

unread,
May 26, 1996, 3:00:00 AM5/26/96
to

Keith Ramsay (Rams...@hermes.bc.edu) wrote:

: Edward Green writes:
: |"Inconsistent" of course means "capable of proving
: |contradictory theorems" and "incomplete" means "there exist true
: |statements that cannot be proven".

: john baez:
: |Your definition of incomplete is bad because it relies on the somewhat
: |murky notion of "truth".

: Paul Budnik:
: |Nonsense. Truth in the context of Godel's proof is not the least bit
: |murky. The true statements not provable by Godel's theorem are statements
: |about the consistency of a formal system and equivalent to the halting
: |problem for some Turing Machine.

: This doesn't make Ed's informal definition a good definition of
: "incomplete".

There is nothing `bad' about it. We do not have an algorithm
to decide mathematical truth but mathematical truth is still an essential
concept in mathematics and definitions that use it are not in any
sense `bad'. It is of course important to understand the limitations
and scope of these definitions.

: I wrote:
: |Sure. If you want to define "complete, insofar as halting questions
: |are concerned" you are on safe ground (although it requires that the language
: |of the formal system in question include sentences which are interpreted
: |as claims about the halting of Turing machines). It is not strong enough
: |a condition to imply "completeness", however.

: Paul Budnik:
: |To repeat myself that is nonsense.

: I believe you misread what I wrote, because you haven't given any
: evidence of its being incorrect.

I did not say it was incorrect. I said it was nonsense. Completeness is
definable in terms of mathematical truth. The fact that we do not
have an algorithm to decide mathematical truth or general agreement
on what mathematical truth is does not imply that we cannot give
definitions in terms of it. We do have agreement that there is such
a thing as mathematical truth and we have broad agreement on how
to decide mathematical truth in a limited way.

The consistency of any formal system is equivalent to the halting
problem for some Turing machine. That does not require any special
interpretation. It follows from the definition of formal system.

[...]
: |What *incompleteness* means for *any formal system* in the context of
: |Godel's proof is not the least bit murky.

: Correct. It means "there is an A such that neither A nor ~A is a
: theorem". It's a standard definition. I see no way to interpret him
: as giving an alternative definition of "incomplete". He did not
: define it as Ed Green did. The definition of "incomplete" doesn't
: refer to restricted types of sentences, even though the proof does
: exhibit a specific type of undecidable sentence.

My point was that the definition of incompleteness that Green gave
makes perfect sense in the context of Godel's proof. The question is
not whether one should redefine the term but rather the `murkiness'
of the definition in question. Godel's result is often spoken of
in just the terms Green used. For example Kleene writes in his
commentary on Godel's proof (from Godel's collected works, V 1, p 127):

However, provability in number theory can be defined in number theory.
Therefore if the provable statements are all true there must be
some true but unprovable formula. Thus Godel came to find the results
he published in 1931, ...

: |Check your quote again and
: |you will notice that Green was defining `incompleteness' and not
: |completeness. In the context of this discussion that is an extremely
: |important distinction.

: This is pretty funny. I'll have to try to remember it. If you're going
: to make a definition, it is generally bad if the negation of the definition
: doesn't work.

Given that the halting problem is semi-decidable but its negation is not,
one needs to be careful about negating definitions. One can decide
that certain systems are incomplete by Green's definition. It is
more problematic to talk about systems being complete because then
you need some notion of truth that applies to everything definable
in the system.

: |To call something nonsense that is nonsense? Yes that is typical.

: You regard yourself as having recognized lots of things as "nonsense",
: but most of even the false things I've seen you declare nonsensical were
: not so much nonsensical as understandable errors.

Is there some reason why an understandable error cannot be nonsense?
Are not most errors understandable?

: Even declaring actual
: nonsense to be nonsense is not *usually* very productive,

Posting to Usenet is not *usually* very productive. What are you expecting?

: I find. In
: my opinion, you are *way* too ready to conclude with a great sense of
: certainty that other people are thinking thoroughly nonsensically.

You are thoroughly misreading me. I certainly do not believe that
Baez is `thinking thoroughly nonsensically'. He is one of the most sensible
posters to this group even if he does have some strange ideas about Everett's
interpretation and philosophy and probability. Occasionally he posts nonsense
as does any fallible human being who posts frequently.

What Baez was doing and what you are doing is nitpicking about technical
points in a way that leads you to say silly nonsensical things.

: I
: think you would be in general a happier person if you got away from
: that tendency.

It is generally a bad idea to give out psychological advice on Usenet.
You cannot know much about anyone based on their postings. It is
easy to misinterpret the literal content of postings and almost
impossible to read out motivations or character traits from them.

I do not possess the tendency you wish me to get away from. I am just
not very diplomatic in my language. I think it makes for better postings
as long as the undiplomatic language refers only to someone's ideas or
claims and not to their character or abilities.

Toby Bartels

unread,
May 27, 1996, 3:00:00 AM5/27/96
to

Paul Budnik <pa...@mtnmath.com> wrote in part:

>Mathematical truth is still an essential concept in mathematics.

It's not essential at all.
You can do all of mathematics perfectly well without referring to it.
If you do refer to it in a definition,
you're just using some undefined predicate,
and could just as easily call it `frunginess'.
Perhaps you even have some axioms about frunginess,
such as, for all propositions p, frungy (p) xor frungy (not p).
Just stick those in if you want them.

>We do have agreement that there is such a thing as mathematical truth.

You have no such thing (nor do you need it); I am a counterexample.

>Godel's result is often spoken of in just the terms Green used.

This is irrelevant.


-- Toby
to...@ugcs.caltech.edu

Matthew P Wiener

unread,
May 27, 1996, 3:00:00 AM5/27/96
to

In article <4obms7$q...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels) writes:
>Paul Budnik <pa...@mtnmath.com> wrote in part:

>>Mathematical truth is still an essential concept in mathematics.

>It's not essential at all. You can do all of mathematics perfectly
>well without referring to it.

Well, logic might have a burp.
--
-Matthew P Wiener (wee...@sagi.wistar.upenn.edu)

Paul Budnik

unread,
May 27, 1996, 3:00:00 AM5/27/96
to

Toby Bartels (to...@ugcs.caltech.edu) wrote:
: Paul Budnik <pa...@mtnmath.com> wrote in part:

: >Mathematical truth is still an essential concept in mathematics.

: It's not essential at all.
: You can do all of mathematics perfectly well without referring to it.

: If you do refer to it in a definition,
: you're just using some undefined predicate,
: and could just as easily call it `frunginess'.
: Perhaps you even have some axioms about frunginess,
: such as, for all propositions p, frungy (p) xor frungy (not p).
: Just stick those in if you want them.

Of course you can change the name of `truth' or anything else.
So what. What is central to mathematics is that certain propositions
are deducible from certain formal systems. Without that formal
mathematics is impossible. With it you have the objective truth
of at least finite mathematics.

: >We do have agreement that there is such a thing as mathematical truth.

: You have no such thing (nor do you need it); I am a counterexample.

I would not be so foolish as to claim *universal* agreement about anything.
I am sure someone on usenet would be happy to argue that the world came to
an end yesterday and all subsequent experience is an illusion.

There is general agreement that objective mathematical truth exists.
The vast majority of mathematicians will agree
on at least the objective truth of finite mathematics.

Ilias Kastanas

unread,
May 28, 1996, 3:00:00 AM5/28/96
to

Comments on a number of postings in this thread.

john baez (ba...@guitar.ucr.edu) wrote:
: In article <4n2sp6$d...@news1.t1.usa.pipeline.com> egr...@nyc.pipeline.com(Edward Green) writes:

: >Goedel's result, in my understanding, says that any system of axioms must
: >either be inconsistent or incomplete (please correct my understanding if
: >necessary). "Inconsistent" of course means "capable of proving
: >contradictory theorems" and "incomplete" means "there exist true
: >statements that cannot be proven".

: Your definition of incomplete is bad because it relies on the somewhat
: murky notion of "truth".

Nonsense. Truth in the context of Godel's proof is not the least bit
murky. The true statements not provable by Godel's theorem are statements
about the consistency of a formal system and equivalent to the halting
problem for some Turing Machine.

1) For the _integers_, the standard model N = {0, 1, 2...}, "truth"
of _any arithmetical statement_ like Am En Ak R(m,n,k) (R recursive... or
even polynomial (~)equality) is absolutely clear. We know what 'true' or
'false' means, and one of them holds. (Never mind we have no algorithm).
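Point 1 can be made concrete with a small sketch (this illustration is mine, not Ilias's; the example relations R are invented): each instance of a recursive R(m,n,k) is mechanically checkable, so truth in N is a definite matter, even though a program can only probe the unbounded quantifiers out to a finite bound.

```python
# Illustration (my sketch, not from the thread): every instance of a
# recursive R(m,n,k) is mechanically checkable, so "true in N" is
# determinate; but a program can only search the unbounded quantifiers
# of  Am En Ak R(m,n,k)  up to a finite bound.

def check_AEA(R, bound):
    """Test  Am En Ak R(m,n,k)  with m, n, k all searched below
    `bound`. True means every m < bound had a witness n; False means
    some m had no witness below the bound (suggestive only, since the
    real quantifiers range over all of N)."""
    for m in range(bound):
        # seek a witness n such that R(m, n, k) holds for every k < bound
        if not any(all(R(m, n, k) for k in range(bound))
                   for n in range(bound)):
            return False
    return True

# Am En Ak (n >= m): true in N, with witness n = m.
assert check_AEA(lambda m, n, k: n >= m, 10)
# Am En Ak (n > m + k): false in N; no single n dominates every k.
assert not check_AEA(lambda m, n, k: n > m + k, 10)
```

Of course, no bound settles the statement; the point is only that each instance is decidable while the whole statement need not be.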

2) "Equivalent to the h.p. for some T.M." can be shortened to "Pi-0-1"
-- that is, of the form An R(n).

3) Goedel's proof does operate on arithmetical wffs. But we don't use
truth in N to define "incomplete" because a stronger result is available,
namely "1-incomplete" (the undecidable sentence is Pi-0-1: Am "m does not
code a proof from T of 0=1" <=> Consis(T) ).

: Say I write down some long complicated list of
: axioms you've never seen before. What does it mean for a statement to
: be "true", if it's not provable from the axioms?

In the context of Godel's proof it means a TM does or does not halt.
The truth of the statement is a logical consequence of a potentially
infinite sequence of finite steps. If the system is inconsistent it
is decidable in a finite number of steps. If it is consistent then
there is no objective way to determine this in a finite number of steps
but the statement is absolutely determined.


The statement in question is not necessarily Consis(T)...

4) P, or ~P might or might not be provable from T. P being (semanti-
cally) "true" is another matter: it means every model M that satisfies T
also satisfies P. Luckily this happens iff P is provable from T, by
guess-who's theorem.

[...]

: Anyway, there are lots of consistent and complete axiom systems.

But none that are powerful enough to embed a universal TM. Any general
purpose computer is a universal TM. Any system powerful enough to model
a computer is subject to Godel's proof. A nontrivial axiom system that
allows one to define arbitrarily large structures is subject to Godel's
proof.

This needs some care.

5) It is possible to axiomatize the (true statements about)
N with 0 and S (successor). (The language has 0, S and = only).
It is a complete, consistent theory and all cofinite subsets of N
are definable.


Keith Ramsay (Rams...@hermes.bc.edu) wrote:


: Sure. If you want to define "complete, insofar as halting questions
: are concerned" you are on safe ground (although it requires that the language
: of the formal system in question include sentences which are interpreted
: as claims about the halting of Turing machines). It is not strong enough
: a condition to imply "completeness", however.

To repeat myself that is nonsense.


Uh, no.

6) Keith is saying that the ability to decide any Pi-0-1 statement
("1-completeness") is far from being able to decide any statement (com-
pleteness)... for Peano Arithmetic, say.


The true statements that Godel's theorem refers to are equivalent
to the halting problem. You do not need a general notion of mathematical
truth. There is no murkiness to the notion of truth needed for Godel's
proof or to fully define *incompleteness* as Godel defined it.


Well, there is no murkiness to _any_ level, not just level 1 (h.p.)..


Keep in mind that there is a mechanical procedure for translating
the axioms for any formal system into a computer program for
enumerating its theorems. We can easily modify such a program
to halt if the program enumerates a contradiction.
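The mechanical procedure described above can be sketched in a few lines (my illustration, not Budnik's; the toy axioms, the string encoding, and the crude modus ponens rule are all invented for the example). The enumerator halts exactly when it derives both a sentence and its negation, so it halts iff the toy system is inconsistent.

```python
# A minimal sketch (mine, not from the thread) of the enumeration idea:
# a "formal system" here is just axioms (strings) plus rules producing
# new theorem-strings. We enumerate theorems breadth-first and stop if
# we ever derive both A and ~A. The toy axioms below are deliberately
# inconsistent, so the search halts.

from collections import deque

def enumerate_until_contradiction(axioms, rules, limit=10000):
    """Enumerate theorems; return a contradicting pair, or None if the
    search budget runs out (a consistent system would run forever)."""
    theorems = set()
    queue = deque(axioms)
    while queue and len(theorems) < limit:
        t = queue.popleft()
        if t in theorems:
            continue
        theorems.add(t)
        neg = t[1:] if t.startswith("~") else "~" + t
        if neg in theorems:          # derived both A and ~A: inconsistent
            return (t, neg)
        for rule in rules:           # apply each inference rule to t
            queue.extend(rule(t, theorems))
    return None                      # no contradiction found within budget

# Toy rule: from A and A->B, conclude B (crude modus ponens on strings).
def modus_ponens(t, theorems):
    out = [s.split("->", 1)[1] for s in theorems
           if "->" in s and s.split("->", 1)[0] == t]
    if "->" in t:                    # t itself is an implication
        a, b = t.split("->", 1)
        if a in theorems:
            out.append(b)
    return out

# Deliberately inconsistent toy axioms: P, P->Q, P->~Q.
print(enumerate_until_contradiction(["P", "P->Q", "P->~Q"],
                                    [modus_ponens]))  # → ('~Q', 'Q')
```

For a consistent axiom set the same search simply never finds a contradicting pair, which is the sense in which consistency is a non-halting claim.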


7) Yes, the set of (codes of) theorems is Sigma-0-1 (while the
set of statements true in N is not Sigma-0-n for any n... alternate
proof of Incompleteness).


: People avoid defining unmodified "completeness" for general formal
: systems as Ed did for the reason John gave.

What *incompleteness* means for *any formal system* in the context of
Godel's proof is not the least bit murky. Check your quote again and
you will notice that Green was defining `incompleteness' and not
completeness. In the context of this discussion that is an extremely
important distinction.

: One has to have a notion
: of truth for all the sentences of the language of the formal system.
: One either defines a particular notion of "validity" and defines

...usually, via an intended model (say, N for PA)

: a corresponding sense of completeness, or one defines "completeness"
: as meaning that for every P, either P is a theorem or not-P is.


Formal systems are syntactic, i.e. about provability. For
"truth", "validity" etc. you need to bring in _models_.

[ . . . ]

Godel did not define "incompleteness" this way, and so far as I know
nobody else does either. Godel wrote [this is a translation, naturally]:

If to the Peano axioms we add the logic of Principia Mathematica
(with the natural numbers as the individuals) together with the
axiom of choice (for all types), we obtain a formal system S, for
which the following theorems hold:
1. The system S is not complete; that is, it contains propositions
A (and we can in fact exhibit such propositions) for which neither
A nor A-bar is provable and, in particular, it contains (even for
decidable properties F of natural numbers) undecidable problems of
the simple structure (Ex)F(x), where x ranges over the natural numbers.


In retrospect, AC and higher types are not needed. F(x) is,
of course, false for all x (in N).

That Godel proved something stronger than just incompleteness doesn't
make redefining "incomplete" a good idea. Note that he's not offering
a second definition of "incomplete" to replace his first one; he is
merely pointing out that the undecidable proposition is of a specific
kind. In Hodges' Model Theory he defines completeness as "for any
statement sigma, either sigma is provable or ~sigma is provable". Other
references do the same. They specifically avoid using a notion of
"truth" since to do so requires limiting oneself to a given domain.


Right; whereas Goedel's theorem holds for any system extending
a weak fragment of PA (usually Q, Robinson's arithmetic).


Keith Ramsay (Rams...@hermes.bc.edu) wrote:

: I believe you misread what I wrote, because you haven't given any
: evidence of its being incorrect.

I did not say it was incorrect. I said it was nonsense. Completeness is
definable in terms of mathematical truth. The fact that we do not


Only if you have an intended model in mind.

8) A theory (e.g. PA) can have lots of different completions
and lots of nonstandard models.

The theory of groups does not decide commutativity -- nor
should it!


have an algorithm to decide mathematical truth or general agreement
on what mathematical truth is does not imply that we cannot give
definitions in terms of it. We do have agreement that there is such
a thing as mathematical truth and we have broad agreement on how
to decide mathematical truth in a limited way.

Given that the halting problem is semi-decidable but its negation is not,
one needs to be careful about negating definitions. One can decide
that certain systems are incomplete by Green's definition. It is
more problematic to talk about systems being complete because then
you need some notion of truth that applies to everything definable
in the system.


9) Incomplete implies consistent; sure enough, PA is consistent
because N models PA (never mind that PA cannot prove its consistency),
and incomplete too. Again, look at the plethora of completions
PA has! Continuum many, in fact. Don't think of N's "truth" when
looking at them; it is necessarily violated. E.g. Consis(PA) is
undecidable; hence there are models of PA + Consis(PA), and also
models of PA + ~Consis(PA). And so on.


Ilias

Gordon Long

unread,
May 28, 1996, 3:00:00 AM5/28/96
to

Paul Budnik:
> Mathematical truth is still an essential concept in mathematics.

Toby Bartels:
> It's not essential at all.

> [...] you're just using some undefined predicate, and could just as
> easily call it `frunginess'.

Paul Budnik:
>Of course you can change the name of `truth' or anything else.
>So what. What is central to mathematics is that certain propositions
>are deducible from certain formal systems. Without that formal
>mathematics is impossible. With it you have the objective truth
>of at least finite mathematics.
>

I don't understand your point. It sounds like you're using "certain
propositions are deducible from certain formal systems" to define
mathematical 'truth'. If so, then how can you claim that there exist
'true' statements in a formal system that are not deducible? By your
own definition, these 'true' statements can't be true.

- Gordon

--
#include <std.disclaimer>
Gordon Long | Grad. Student, High Energy Physics
gor...@schwinger.physics.umd.edu | University of Maryland

Tim Poston

unread,
May 29, 1996, 3:00:00 AM5/29/96
to

Keith Ramsay (Rams...@hermes.bc.edu) wrote:
: In article <DrJ2p...@spcuna.spc.edu>, dav...@spcunb.spc.edu (David K.
: Davis) wrote:
: |: However, I'm a little confused myself. If we take one of the unprovable
: |: (and true) expressions, we can adjoin it to the axiom system. What do
: |: we get then? A non-standard arithmetic? What then becomes of truth?
: [...]
: |Actually I meant to say, "adjoin its negation to the axiom system..."

: Yes, you get a non-standard "arithmetic". It has a model, but in any
: model, the objects formally corresponding to the natural numbers are
: more than just {0,1,2,3,...}. For example, if we adjoin the negation
: of Godel's sentence to some system of arithmetic, we get a consistent
: theory which asserts the existence of an element "x" having certain
: properties. But 0 doesn't satisfy them, 1 doesn't, 2 doesn't, and so on.

: Thus "x" becomes an auxiliary entity which, due to the prior axioms,
: behaves in many respects like an integer, but isn't an integer.

You mean, "isn't one of the integers you first thought of".
It has _all_ the properties of an integer required in the
"some system of arithmetic" you adjoined the extra axiom to.
If "and so on" is made precise by referring to the axiom of induction
(usually present in such systems), it has all the "and so on"
properties you could ask for... except for outside-the-system ones,
like occurring in a previously chosen model of the system.

But then you're defining "integer" as "element of this model",
rather than working from axiom systems.

Tim
____________________________________________________________________________
Tim Poston Institute of Systems Science, National University of Singapore
A billion dollars here, a billion dollars there,
and pretty soon you're talking Orbital Yen.

Toby Bartels

unread,
May 29, 1996, 3:00:00 AM5/29/96
to

Paul Budnik <pa...@mtnmath.com> wrote:

>What is central to mathematics is that certain propositions
>are deducible from certain formal systems. Without that formal
>mathematics is impossible. With it you have the objective truth
>of at least finite mathematics.

How does the objective truth of a statement such as 3 + 4 = 7 come from
the fact that certain propositions are provable in certain formal systems?
I can't think of any way to do this that doesn't let me deduce
the objective truth of 3 + 4 != 7, but I may be missing something.

Also, is the objective truth of only finite mathematics enough?
I have to think about that...

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>:Paul Budnik <pa...@mtnmath.com> wrote:

>:>We do have agreement that there is such a thing as mathematical truth.

>:You have no such thing (nor do you need it); I am a counterexample.

>I would not be so foolish as to claim *universal* agreement about anything.

True.
The importance of my example is that I do mathematics just fine
without believing in mathematical truth, but only in the formalities.
(But I'm not a counterexample to the sentence I was responding to.)

>I am sure someone on usenet would be happy to argue that the world came to
>an end yesterday and all subsequent experience is an illusion.

You may want to see my recent posts to sci.physics with the Subject
'Toward a Transformative Hermeneutics of Quantum Gravity' :-).
(Not that the nature of physical reality impinges on our discussion.)

>There is general agreement that objective mathematical truth exists.
>The vast majority of mathematicians will agree
>on at least the objective truth of finite mathematics.

Yes, even formalists seem to be susceptible to the truth of arithmetic.


-- Toby
to...@ugcs.caltech.edu

Daryl McCullough

unread,
May 29, 1996, 3:00:00 AM5/29/96
to

to...@ugcs.caltech.edu (Toby Bartels) writes:

>Paul Budnik <pa...@mtnmath.com> wrote in part:

>>Mathematical truth is still an essential concept in mathematics.

>It's not essential at all.
>You can do all of mathematics perfectly well without referring to it.
>If you do refer to it in a definition,
>you're just using some undefined predicate,
>and could just as easily call it `frunginess'.
>Perhaps you even have some axioms about frunginess,
>such as, for all propositions p, frungy (p) xor frungy (not p).
>Just stick those in if you want them.

Quite a number of people hold such a position about mathematical
truth, but personally, I agree with Paul---I find such a position
nonsensical.

Presumably, you view mathematical statements as nothing more than
strings of symbols derived according to arbitrary rules. So you
would not say "1 + 0 = 1" is true, you would instead say that it
is derivable from PA (actually, in PA it would probably be written
as "S(0) + 0 = S(0)"). But what about the claim that it is derivable
from PA? Is that true, or is it again only a string of symbols derived
according to arbitrary rules?
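Daryl's point about "1 + 0 = 1" as the string manipulation "S(0) + 0 = S(0)" can be made concrete (a sketch of my own, not Daryl's): numerals are successor-strings, and "+" is evaluated purely by PA's two recursion equations, with no appeal to truth.

```python
# A sketch (mine, not from the thread) of the formalist picture:
# numerals are the strings 0, S(0), S(S(0)), ..., and "+" is computed
# purely by the PA recursion equations  x + 0 = x  and
# x + S(y) = S(x + y),  i.e. by symbol manipulation alone.

def numeral(n):
    """The PA numeral for n, as a string: numeral(2) == 'S(S(0))'."""
    return "S(" * n + "0" + ")" * n

def add(x, y):
    """Rewrite the term x + y to a numeral using the recursion equations."""
    if y == "0":
        return x                       # x + 0 = x
    inner = y[2:-1]                    # y == 'S(t)'; strip 'S(' and ')'
    return "S(" + add(x, inner) + ")"  # x + S(t) = S(x + t)

# "1 + 0 = 1" becomes the purely syntactic fact that S(0) + 0
# reduces to the string S(0); likewise "3 + 4 = 7":
assert add(numeral(1), numeral(0)) == numeral(1)
assert add(numeral(3), numeral(4)) == numeral(7)
```

Whether these rewrites track any "objective truth" is exactly what the thread is arguing about; the computation itself needs no such notion.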

Anyway, getting back to Goedel's incompleteness theorem. I think
that Ed Green's original formulation is pretty much right. The
importance of incompleteness is that all *truths* cannot be derived
in any sufficiently powerful system. John Baez objected to this
because many theories have no intended interpretation. That is
certainly correct. However, it is impossible to *prove* that a
system is "sufficiently powerful" for Goedel's theorem to be
applicable unless one can interpret elementary arithmetic in it.
For the subset of the language that can be interpreted as saying
something about arithmetic, there *is* a notion of truth (or at
least many of us agree that there is).

It is true that completeness and consistency can be given syntactic
definitions that don't involve truth: A theory is consistent if it
is impossible to derive both Phi and ~Phi, a theory is complete if it
is always possible to derive exactly one of them. However, by Goedel's
*completeness* theorem for first-order logic, these syntactic notions
have a clear connection with the semantic notion of truth in a model.

Daryl McCullough
ORA Corp.
Ithaca, NY

Gordon Swobe

unread,
May 30, 1996, 3:00:00 AM5/30/96
to

> pa...@mtnmath.com (Paul Budnik) wrote in article <4ocn6m$m...@mtnmath.com>...
> Toby Bartels (to...@ugcs.caltech.edu) wrote:
> : Paul Budnik <pa...@mtnmath.com> wrote in part:
>
> : >Mathematical truth is still an essential concept in mathematics.
>
> : It's not essential at all.

The debate here is that between the mathematical platonists and the
formalists. Those who assert the existence of objective mathematical truth
fall into the platonist's camp while those who do not are generally
formalists. Formalists assert that mathematical statements are nothing
more than products of the axioms of the formal system from which they are
derived, while platonists believe mathematical truths are objectively real
and therefore discoveries rather than mere mental constructions or
inventions of mathematicians. It is generally believed that Kurt Godel
disproved the philosophical validity of mathematical formalism by proving
that all formal systems contain at least one true statement not provable
by the axioms of the system. The true but unprovable statement is "seen"
to be true though it cannot be proved within the system. How is it seen to
be true? According to platonists, the human mind is capable of seeing
truth through revelation and insight into the platonic realm of pure
objective ideas.

Is mathematical truth an essential concept in mathematics? I would say it
is not. It is, however, an essential concept in the philosophy of science.
Mathematicians can function quite well without ever worrying about the
veracity of their mathematical equations, so long as all mathematicians
agree on the axioms. :) If all mathematicians decided to agree that 1+1=3
then all mathematicians would be happy. The philosophers would not be so
pleased.

Dale W Hall

unread,
May 30, 1996, 3:00:00 AM5/30/96
to

In article <01bb4e11.37d90280$774466c6@gordons>,
Gordon Swobe <gsw...@rtd.com> wrote:
... deleted stuff ...

>
>Is mathematical truth an essential concept in mathematics? I would say it
>is not. It is, however, an essential concept in the philosophy of science.
>Mathematicians can function quite well without ever worrying about the
>veracity of their mathematical equations, so long as all mathematicians
>agree on the axioms. :) If all mathematicians decided to agree that 1+1=3
>then all mathematicians would be happy.
^^^
Make that both. If mathematics were that divorced from observable
reality, it would be an irrelevant exercise. In such a world,
mathematicians would all live in plywood and tarpaper shacks in Montana,
and make their livelihoods doing other than mathematics. Their number
would dwindle considerably.

To say that mathematics constitutes a meaningful field of study is to
claim that (some of) the objects of mathematical investigation have a
nontrivial connection to the objects of concern to "normal people", for
instance, bean counters or bridge builders or engineers of various
stripes or physicists. Can we know that this claim has merit? For some
areas, the recognition is immediate, whereas for others it is long in
coming.

If, for example, we as mathematicians were to tout this mathematical
thing called "addition", saying it would enable you to determine how
many places to set at dinner if it were just you and your (solitary)
sweetie tonight, and would perform other wonders like balancing your
checkbook and the like, it would be incumbent on us to have a theory
that actually corresponded to what was observed. In short, if the
proponent of the theory of "addition" maintained as an axiom "1+1=3"
then he would only be able to sell his theory to people holding seders
for parties of two.

What seems to be being ignored in this discussion is that when we say
"this theory [T] is meaningful" in a particular case [C], and mention
ANYTHING (for instance, the facts or problems of case C) outside of the
strict confines of the theory T, it is necessary to have a notion of
truth, at least to the extent that one can state that the axioms of theory
T (as well as its production rules) are valid in the case C.

The strict formalists, quite properly I suppose, claim that this
correspondence has nothing to do with mathematics. After all, when I
solve a particular mathematical problem, or prove a particular theorem,
or extend an existing theory, I am not at all worried about anything
outside of the theory I am developing. I don't care [at the moment] to
solve the problems of the world, such as poverty, racism, my dripping
faucet, finding the perfect laugh track, or any other of the recognized
Great Problems of Western Civilization. All that matters is the theory
under discussion. Now, it turns out, whenever I am aware of (or am made
aware of) relationships between My Problem and any other mathematical
problem, I may (must?) reasonably worry about the implications between My
Work and Other Work: once a correspondence is known to exist between two
areas of work, they cease to be seen as independent theories.

I can't quite buy this "not my job" notion of mathematics, and I suspect
that working mathematicians don't really buy it either. Certainly, my
remarks about forgetting [for the moment] about the problems of the rest
of the world while doing mathematics are important. Those problems can,
and I believe will usually, distract the mathematician from the task at
hand, at the very least forcing either the introduction of unwarranted
assumptions or (a separate version of the same thing) the use of an
incomplete or (perhaps subtly) contradictory model of the phenomenon being
studied.

That said, you're darned tootin' that I will consider in my aesthetic
judgments such things as the applicability of this or that aspect of My
Work to some more widely-recognized area of concern. The correspondence
may be to some purely mathematical consideration, say the notion that theory
T and theory T' are similar (in terms of a functional correspondence between
the objects studied, and a similarity between the patterns and methods of
proof) or isomorphic (where one would have a bijection between objects and
much more than a similarity in terms of proofs in T and T'). On the
other hand, the notion of applicability may be more conventional, i.e.,
a true or imagined "application" of the Results of My Work to a
(perhaps) non-mathematical area of consideration.

In either case, I was careful to note this as constituting a type of
aesthetic judgment; in following one's aesthetic sense, one generally
uses quality judgments to choose among options, for instance,
identification of axioms in the (Primary) mathematical theory with
recognized features of the one being used as an intended application.
This is where the bogus "1+1=3" axiom fails. No one who was thinking of
modeling (what almost anyone thinks of as) the integers under normal
circumstances would ever consider this statement as an axiom.

OK, I've engaged in an ample quantity of rambling. I would like to point
out that I've noticed that, in scanning abstracts of mathematics papers
and talks, I can spot a likely resident of an intellectual/mathematical
backwater by simply noting how unconnected the mathematics is to
ANYTHING. I should bring in a few issues of the old AMS Notices and
copy a few abstracts for your perusal. "Lower Semi-invariant Tensor modules
over Left Pseudo-Screenable Hyperalgebras are Quasi-meta-chainable."
would almost surely be written by someone at Western Upper Mississippi Delta
Teachers' College for the Visual Arts and Tractor/Trailer Driving School.

My point is that, while there is no legal or even formal requirement to
have one's mathematics "toe the line" in terms of strong connection with
the outside world, mathematics that persists in rejecting the outside
world has a strong tendency towards the baroque or rococo: much flash,
embellishment and frilliness with little in the way of new mathematics.

Dale.


Toby Bartels

unread,
Jun 3, 1996, 3:00:00 AM6/3/96
to

Gordon Swobe <gsw...@rtd.com> wrote:

>The debate here is that between the mathematical platonists and the
>formalists. Those who assert the existence of objective mathematical truth
>fall into the platonist's camp while those who do not are generally
>formalists. Formalists assert that mathematical statements are nothing
>more than products of the axioms of the formal system from which they are
>derived, while platonists believe mathematical truths are objectively real
>and therefore discoveries rather than mere mental constructions or
>inventions of mathematicians. It is generally believed that Kurt Godel
>disproved the philosophical validity of mathematical formalism by proving
>that all formal systems contain at least one true statement not provable
>by the axioms of the system. The true but unprovable statement is "seen"
>to be true though it cannot be proved within the system. How is it seen to
>be true? According to platonists, the human mind is capable of seeing
>truth through revelation and insight into the platonic realm of pure
>objective ideas.

This is generally believed by the Platonists.
Those crazy formalists keep insisting they *can't* "see" the statement's truth.
It just corresponds in a certain way to a theorem about formal systems.

>Is mathematical truth an essential concept in mathematics? I would say it
>is not. It is, however, an essential concept in the philosophy of science.

I've rejected mathematical truth in mathematics and scientific truth in science.
It should come as no surprise to anyone that I don't believe
mathematical truth is a necessary concept in science, nor in its philosophy.
A complete scientific theory will include a formal system
of mathematical axioms and rules of inference (ZF is traditional).
You don't need mathematical truth; you just say the physical situation
is modelled by *this* formal system and not (necessarily) some other system.

>Mathematicians can function quite well without ever worrying about the
>veracity of their mathematical equations, so long as all mathematicians
>agree on the axioms. :) If all mathematicians decided to agree that 1+1=3
>then all mathematicians would be happy. The philosophers would not be so
>pleased.

And after having agreed on this, someone would come up with PA
and everyone would *also* adopt 1+1=2 (possibly with different notation).
At least, they would if mathematicians in this parallel universe think PA
is as interesting as mathematicians in this universe think it is.


-- Toby
to...@ugcs.caltech.edu

Toby Bartels

Jun 3, 1996, 3:00:00 AM

Dale W Hall <dah...@lynx.dac.neu.edu> wrote:

>Gordon Swobe <gsw...@rtd.com> wrote:

>>Is mathematical truth an essential concept in mathematics? I would say it
>>is not. It is, however, an essential concept in the philosophy of science.
>>Mathematicians can function quite well without ever worrying about the
>>veracity of their mathematical equations, so long as all mathematicians
>>agree on the axioms. :) If all mathematicians decided to agree that 1+1=3
>>then all mathematicians would be happy.
> ^^^
>Make that both. If mathematics were that divorced from observable
>reality, it would be an irrelevant exercise. In such a world,
>mathematicians would all live in plywood and tarpaper shacks in Montana,
>and make their livelihoods doing other than mathematics. Their number
>would dwindle considerably.

You mean if mathematicians didn't also invent PA
(possibly with different notation), mathematicians would be few.
Obviously, if mathematicians are that uninventive,
no one would want to join their number.
Observable reality has nothing to do with this.

>To say that mathematics constitutes a meaningful field of study is to
>claim that (some of) the objects of mathematical investigation have a
>nontrivial connection to the objects of concern to "normal people", for
>instance, bean counters or bridge builders or engineers of various
>stripes or physicists. Can we know that this claim has merit? For some
>areas, the recognition is immediate, whereas for others it is long in
>coming.

O, boy! You've just upset a whole bunch of people
(if there are that many still reading this).
THE MEANINGFULNESS OF MATHEMATICS IS MORE THAN THAT DERIVED FROM APPLICATIONS!
It also derives meaning from being interesting in its own right.

>If, for example, we as mathematicians were to tout this mathematical
>thing called "addition", saying it would enable you to determine how
>many places to set at dinner if it were just you and your (solitary)
>sweetie tonight, and would perform other wonders like balancing your
>checkbook and the like, it would be incumbent on us to have a theory
>that actually corresponded to what was observed. In short, if the
>proponent of the theory of "addition" maintained as an axiom "1+1=3"
>then he would only be able to sell his theory to people holding seders
>for parties of two.

This mathematics is not applicable to dinner parties. So what?

>What seems to be being ignored in this discussion is that when we say
>"this theory [T] is meaningful" in a particular case [C], and mention
>ANYTHING (for instance, the facts or problems of case C) outside of the
>strict confines of the theory T, it is necessary to have a notion of
>truth, at least to the extent that one can state that the axioms of theory
>T (as well as its production rules) are valid in the case C.

We're ignoring this because we don't care.
What's C got to do with mathematics?

>The strict formalists, quite properly I suppose, claim that this
>correspondence has nothing to do with mathematics.

Of course we won't claim that. The application of mathematics obviously
has to do with mathematics. OTOH, the strict Platonists, as well as those
in the middle, will join with us in claiming that it *is* not mathematics.

>After all, when I
>solve a particular mathematical problem, or prove a particular theorem,
>or extend an existing theory, I am not at all worried about anything
>outside of the theory I am developing.

deletions

>Now, it turns out, whenever I am aware of (or am made
>aware of) relationships between My Problem and any other mathematical
>problem, I may (must?) reasonably worry about the implications between My
>Work and Other Work: once a correspondence is known to exist between two
>areas of work, they cease to be seen as independent theories.

You may. Why must you? If it doesn't interest you, ignore it.

>I can't quite buy this "not my job" notion of mathematics, and I suspect
>that working mathematicians don't really buy it either.

Nope. Good thing no one is espousing it either.
Part of a mathematician's job is applying mathematics.
However, this is not to say that every mathematician must work on this;
some mathematicians ignore applications while others specialize in them.

deletions

>In following one's aesthetic sense, one generally
>uses quality judgments to choose among options; for instance,
>identification of axioms in the (Primary) mathematical theory with
>recognized features of the one being used as an intended application.
>This is where the bogus "1+1=3" axiom fails. No one who was thinking of
>modeling (what almost anyone thinks of as) the integers under normal
>circumstances would ever consider this statement as an axiom.

How does that make this axiom bogus?
You can study *both* formal systems, or either, or neither.
You seem to think the value 1+1 is all or nothing.
The point of formalism is that one can do whatever one likes,
and this includes everything you espouse in this post.
I won't stop you from doing it (nor claim it's not good mathematics);
please extend me the same courtesy if I happen to want to work with 1+1=3.
Don't worry; I won't try to apply it to dinner parties.

deletions

>My point is that, while there is no legal or even formal requirement to
>have one's mathematics "toe the line" in terms of strong connection with
>the outside world, mathematics that persists in rejecting the outside
>world has a strong tendency towards the baroque or rococo: much flash,
>embellishment and frilliness with little in the way of new mathematics.

Like number theory in the first half of this century?
That didn't have applications either, but it sure was interesting.
Or is that OK because it came from a formal system that had applications?
I wonder why it should be formal systems that carve mathematics up
into useful and useless parts. Your arguments apply equally well
to divisions of emphasis within any particular formal system.


-- Toby
to...@ugcs.caltech.edu

Bill Dubuque

Jun 8, 1996, 3:00:00 AM

pa...@mtnmath.com (Paul Budnik) wrote to sci.math,sci.physics on 5/27/96:
:
: Of course you can change the name of `truth' or anything else.
: So what. What is central to mathematics is that certain propositions
: are deducible from certain formal systems. Without that formal
: mathematics is impossible. With it you have the objective truth
: of at least finite mathematics. ...
:
: There is general agreement that objective mathematical truth exists.
: The vast majority of mathematicians will agree
: on at least the objective truth of finite mathematics.

But there is no such thing as a comprehensive, self-contained discipline
of finite mathematics. Any such system suffers from Goedel
incompleteness, and there are by now well-known "natural" independence
results which serve to show that such incompleteness results are not
just contrived uninteresting curiosities. There is simply no way of
avoiding infinity when it comes to truth.

Perhaps the best exposition along these lines is Steven Simpson's
paper "Unprovable theorems and fast-growing functions", Contemporary
Math. 65 1987, 359-394. I've appended an asciized version of this
paper below (the original is at ftp.math.psu.edu:/pub/simpson).
See also my earlier sci.math posts on Goedel incompleteness,
Goodstein's Theorem, Kruskal's Theorem, etc. which you may find via
the newsgroup archives at http://www.dejanews.com/.

-Bill

UNPROVABLE THEOREMS AND FAST-GROWING FUNCTIONS

Contemporary Math. 65 1987, 359-394

Stephen G. Simpson


#0. Introduction.
------------

This paper is a write-up of an expository talk which I have given to
mathematical audiences at a number of universities in the United States
and Europe over a period of several years. Preparation of this paper was
partially supported by NSF grant DMS-8317874.
The purpose of the talk is to exposit some recent results (1977 and
later) in which mathematical logic has impinged upon finite combinatorics.
Like most good research in mathematical logic, the results which I am
going to discuss had their origin in philosophical problems concerning the
foundations of mathematics. Specifically, the results discussed here were
inspired by the following philosophical question. Could there be such a
thing as a comprehensive, self-contained discipline of finite
combinatorial mathematics?
It is well known that a great deal of reasoning about finite
combinatorial structures can be carried out in a self-contained finitary
way, i.e. with no reference whatsoever to infinite sets or structures. I
have in mind whole branches of mathematics such as finite graph theory,
finite lattice theory, finite geometries, block designs, large parts of
finite group theory (excluding character theory, in which use is made of
the field of complex numbers), and large parts of number theory (including
the elementary parts but excluding analytical techniques such as contour
integrals). One could easily imagine comprehensive textbooks of these
subjects in which infinite sets are never mentioned, even tangentially.
All of the reasoning in such textbooks would be concerned exclusively with
finite sets and structures.
Consequently, there is a strong naive impression that the answer to
our above-mentioned philosophical question is "yes."
However, naive impressions can be misleading. I am going to discuss
three recent results from mathematical logic which point to an answer of
"no." Namely, I shall present three examples of combinatorial theorems
which are finitistic in their statements but not in their proofs. Each of
the three theorems is simple and elegant and refers only to finite
structures. Each of the three theorems has a simple and elegant proof.
The only trouble is that each of the proofs uses an infinite set at some
crucial point. Moreover, deep logical investigations have shown that the
infinite sets are in fact indispensable. Any proof of one of these finite
combinatorial theorems must involve a detour through the infinite. Thus,
in a strong relative sense, the three theorems are "unprovable" -- they
cannot be proved by means of the finite combinatorial considerations in
terms of which they are stated.
There is one somewhat technical point from mathematical logic which I
must now discuss. This involves the answer to a question which may have
already occurred to the reader. Namely, how might one establish that a
given finite combinatorial theorem cannot be given a finite combinatorial
proof? We know how to recognize a finite combinatorial proof when we see
one. But how could we hope to establish that there is no finite
combinatorial proof of a given theorem? In order to do this, it would be
first of all necessary to give a precise definition of "finite
combinatorial proof" or at least "finite combinatorial provability."
Such a definition has in fact been formulated and validated by
mathematical logicians. The definition is unquestionably adequate in that
it is precise, rigorous, and captures the intuitive idea of "provability
by finite combinatorial means." I do not want to give the detailed
definition here. I shall however say something about the definition.
(The details can be found in any good first-year graduate course or
textbook on the foundations of mathematics. For readers who are trained
in mathematical logic, let me remark that the notion I have in mind is
that of provability in the formal system PA of first-order Peano
arithmetic, also known as Z_1. Equivalently, one could consider
provability in the formal system consisting of Zermelo-Fraenkel set theory
with the axiom of infinity replaced by "All sets are finite.")
More specifically, there is a formal system consisting of appropriate
symbols, formulas, axioms, and rules of inference. For my purposes here I
shall refer to this formal system as T_fin, the theory of finite sets and
structures. The language of T_fin contains variables which can denote
arbitrary finite sets or finite structures of various sorts. For
instance, there could be variables ranging over natural numbers, finite
sequences of natural numbers, finite unordered sets, finite ordered sets,
finite sequences of finite unordered sets, etc. Quantification over such
finite structures is permitted. For instance, we can write down a formula
saying that for any finite structure of a certain sort there exists
another finite structure bearing a certain relationship to the first one,
etc. There are also logical connectives which have their usual meaning.
All valid methods of finite combinatorial inference are either formulated
explicitly as rules of the formal system T_fin or can be derived from
rules which are so formulated. For instance, T_fin includes the powerful
principle of proof by induction on the cardinality of a finite
combinatorial structure of a given sort. This principle applies to any
property of finite structures which can be expressed by a formula of the
language of T_fin.
    The point of T_fin is that our intuitive but imprecise concept of
"finite combinatorial provability" can be replaced by the precisely
defined notion of "provability in the formal system T_fin." This makes it
possible for logicians to state and prove results to the effect that
particular finite combinatorial theorems cannot be given finite
combinatorial proofs.
The body of this paper is devoted to a fairly detailed discussion of
the above-mentioned three examples of "unprovable" theorems. The three
theorems involve respectively colorings of finite sets, embeddings of
finite trees, and exponential notation. I shall discuss the three
examples one at a time together with appropriate background material. In
each case I shall present the finite combinatorial theorem and at least a
sketch of its proof. I shall then give indications of why the theorem
cannot be proved in T_fin, i.e. why any proof must involve a detour through
the infinite. This will require some discussion of ordinal numbers and of
a hierarchy of fast-growing functions.
At the end of the paper there is an appendix describing subsequent,
related research. (The material in the appendix was not included in the
talk on which this paper is based.) For further details I refer to the
literature, including several of the other papers in this volume.

#1. A variant of Ramsey's Theorem.
- ------- -- -------- -------

Our first example of an "unprovable" theorem of finite combinatorics
concerns colorings of the k-element subsets of a given finite set. It is
closely related to a famous theorem of Ramsey [31]. I begin by recalling
the Finite Ramsey Theorem.
If X is any finite set, we write |X| = cardinality of X. We use i,
j, k, l, m and n to denote positive integers. If X is any set, we
denote by [X]^k the set of all k-element subsets of X.

1.1. FINITE RAMSEY THEOREM (Ramsey [31]). For all k, l and m,
there exists n so large that: if |X| = n and [X]^k = C_1 U ... U C_l,
then there exists Y <= X such that |Y| >= m and [Y]^k <= C_i for some
i <= l.

The reader who has no previous acquaintance with Ramsey's Theorem may
find the following sociological illustration useful. Suppose that k = l =
2. In this case the Finite Ramsey Theorem says the following. For any m,
there exists n so large that, in any gathering of n people, there will
always be a set of at least m of them who all know each other, or none of
whom know each other. Here C_1 (respectively C_2) is the set of unordered
pairs of people who know each other (respectively do not know each other).
For example, if m = 3 then we may take n = 6. This means that in any
gathering of 6 people, there will always be at least three who either all
know each other or all don't know each other.
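The m = 3, n = 6 case is small enough to verify exhaustively by machine. The following sketch is my own illustration, not part of the paper: it checks every 2-coloring of the pairs from an n-element set for a monochromatic triple, confirming that n = 5 does not suffice while n = 6 does.

```python
# Brute-force check of the k = l = 2, m = 3 case of the Finite Ramsey
# Theorem: among any 6 people there are always 3 mutual acquaintances or
# 3 mutual strangers, while 5 people do not suffice.
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each unordered pair (a, b), a < b, to 0 or 1."""
    return any(
        coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
        for a, b, c in combinations(range(n), 3)
    )

def ramsey_holds(n):
    """True iff EVERY 2-coloring of pairs from an n-set has a mono triangle."""
    edges = list(combinations(range(n), 2))
    return all(
        has_mono_triangle(n, dict(zip(edges, bits)))
        for bits in product((0, 1), repeat=len(edges))
    )

print(ramsey_holds(5))  # False: the pentagon/pentagram coloring escapes
print(ramsey_holds(6))  # True, so f_R(2,2,3) = 6
```

The n = 6 check examines all 2^15 colorings of the 15 pairs, which is instant; the exhaustive approach of course collapses for larger parameters, which is exactly the superexponential growth discussed below.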

In the statement of the Finite Ramsey Theorem, there is no harm in
assuming that the sets C_1, ..., C_l are pairwise disjoint. In this case,
the hypothesis [X]^k = C_1 U ... U C_l is sometimes expressed by saying
that the k-element subsets of X have been colored with l colors
C_1, ..., C_l.
The Finite Ramsey Theorem is definitely not our promised example of
an "unprovable" theorem. Like the vast majority of finite combinatorial
theorems, the Finite Ramsey Theorem has a finitistic proof which is
straightforwardly formalizable in T_fin. I shall not give the proof here,
but in order to give the reader a feeling for this, let me mention that
the proof yields an easily described upper bound on the size of n
(expressed as a function of the other parameters k, l, and m). Let us
denote by f_R(k,l,m) the smallest n such that the conclusion of the
Finite Ramsey Theorem holds. The function f_R is known in the literature
as "the Ramsey function." One of the standard proofs of the Finite Ramsey
Theorem (namely the proof by "ramification") yields an upper bound of
approximately

    l^(l^(...^(l^(m+2k))...))    (an exponential tower of k+1 l's,
                                  topped by the exponent m+2k)

for f_R(k,l,m). It is also known that f_R(k,l,m) is bounded below by a
similar expression. Thus f_R(k,l,m) is of superexponential growth rate
in the parameter k. Such growth rates fall well within the strictures of
finite combinatorial provability.
We shall now state a finite combinatorial theorem which looks very
similar to the Finite Ramsey Theorem and may appear to be only slightly
stronger. However, we shall see that the logical properties and growth
rates which are associated to this Modified Finite Ramsey Theorem are
strikingly different from those of the original Finite Ramsey Theorem.

1.2. MODIFIED FINITE RAMSEY THEOREM. For all k, l and m there
exists n so large that: if X = {1,2,...,n} and if [X]^k = C_1 U ...
U C_l, then there exists Y <= X such that |Y| >= m, |Y| >= min(Y), and
[Y]^k <= C_i for some i <= l.

The Modified Finite Ramsey Theorem can be restated as follows. A
finite nonempty set Y of positive integers is said to be relatively
large if |Y| >=min(Y), i.e. the cardinality of Y is at least as great
as the minimum element of Y. For example {3,6,7} is relatively large and
{6,7,9,11,15} is not relatively large. The Modified Finite Ramsey Theorem
is just the Finite Ramsey Theorem with the conclusion strengthened to say
that in addition Y is relatively large. This is a transparent
generalization of the Finite Ramsey Theorem.
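The "relatively large" condition is a one-liner in code. This tiny sketch (my own, reusing the paper's two examples) makes the definition concrete:

```python
# "Relatively large": a finite nonempty set Y of positive integers whose
# cardinality is at least as great as its minimum element.
def relatively_large(Y):
    return len(Y) >= min(Y)

print(relatively_large({3, 6, 7}))          # True:  |Y| = 3 >= min(Y) = 3
print(relatively_large({6, 7, 9, 11, 15}))  # False: |Y| = 5 <  min(Y) = 6
```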
Consider for instance the sociological illustration which was
discussed earlier. In those terms, the special case k = l = 2 of the
Modified Finite Ramsey Theorem says the following. Given m, there exists
n so large that the following holds. Consider an arbitrary set of n
people with the property that, for each j <= n, the j-th person in the set
is of the opinion that the number j is a very big number. Then there
will have to be a subset consisting of at least m people all of whom
know each other or all of whom do not know each other, such that in
addition the subset is very big according to the opinion of at least one
of the people in the subset.
We now come to the main point of this section. The Modified Finite
Ramsey Theorem was first formulated by Paris and Harrington in order to
enable them to state the following unprovability result.

1.3. THEOREM (Paris-Harrington [30]). The Modified Finite Ramsey
Theorem is not provable in T_fin. In other words, the Modified Finite
Ramsey Theorem cannot be proved by finite combinatorial means alone.

Thus the Modified Finite Ramsey Theorem is our first example of an
"unprovable" combinatorial theorem. As explained above, the unprovability
is only relative. The Modified Finite Ramsey Theorem is a valid
mathematical theorem which is proved by standard combinatorial methods.
The only unusual feature is that, although the statement of the theorem is
strictly finitistic, any proof of the theorem must necessarily involve the
infinite.
For interest's sake I shall now give the proof of the Modified Finite
Ramsey Theorem. The proof is simple but uses infinite sets. It is based
on the following well-known infinitistic version of Ramsey's Theorem.

1.4. INFINITE RAMSEY THEOREM (Ramsey [31]). Let X be an infinite
set. If [X]^k = C_1 U ... U C_l, then there exists an infinite set
Y <= X such that [Y]^k <= C_i for some i <= l.

I shall deduce the Modified Finite Ramsey Theorem from the Infinite
Ramsey Theorem. Suppose that the Modified Finite Ramsey Theorem is false.
Then for some fixed k, l, and m, there is no n satisfying the
conclusion of the theorem. In other words, for each n there is a
coloring [{1,...,n}]^k = C^n_1 U ... U C^n_l such that for no
Y <= {1,...,n} and i <= l do we have |Y| >= m, |Y| >= min(Y),
[Y]^k <= C^n_i. Let X be the set of all positive integers. For each
i <= l let C_i be the set of all (k+1)-element subsets of X of the
form F U {n+1} with F @ C^n_i. Thus [X]^(k+1) = C_1 U ... U C_l and
there is no finite Y <= X such that |Y| >= m + 1, |Y| >= min(Y) + 1,
[Y]^(k+1) <= C_i for some i <= l. Hence there is no infinite Y <= X
such that [Y]^(k+1) <= C_i for some i <= l. This is in
contradiction to the Infinite Ramsey Theorem.
As indicated above, Paris and Harrington have shown that the Modified
Finite Ramsey Theorem cannot be given a finite combinatorial proof. In
order to give some indication of why this is so, we shall now discuss the
associated fast-growing function. Given k, l, and m, let f_PH(k,l,m) be
the smallest n which satisfies the conclusion of the Modified Finite
Ramsey Theorem. Recall that f_R(k,l,m) is the analogous function for the
original Finite Ramsey Theorem. One way to highlight the difference
between the Finite Ramsey Theorem and the Modified Finite Ramsey Theorem
is to compare the growth rates of f_R and f_PH. We shall see that f_PH
grows much faster than f_R.
    We have already described the growth rate of f_R. In order to
describe the growth rate of f_PH, it will be necessary to discuss a
set-theoretic construction due to Cantor known as the ordinal numbers. We
do not assume that the reader has any previous acquaintance with ordinal
numbers.
The ordinal numbers are best understood as a transfinite extension of
the natural number sequence. The natural number sequence is generated by
starting with 0 and then repeatedly adding 1. If we imagine the natural
number sequence 0, 1, 2, ... as a completed totality, it is reasonable to
posit a new symbol w which is to be regarded as the limit of this
sequence. We can then continue the sequence by resuming the repeated
addition of 1. Thus we obtain an extended sequence


0, 1, 2, ..., w, w+1, w+2, ....


The same idea can be used to generate further extensions. If we
imagine that the sequence w, w+1, w+2, ... has been similarly completed,
it is reasonable to denote the limit by some such expression as w+w or
w*2. We can then resume the process of adding 1's. This gives


0, 1, 2, ..., w, w+1, w+2, ..., w+w = w*2, w*2+1, w*2+2, ....


Denoting the limit of w*2+1, w*2+2, ..., w*2+n,... by w*3 and
continuing as before, we obtain


0, 1, 2, ..., w, w+1, w+2, ..., w*2, ..., w*3, ..., w*4, ....


If we now let w*w = w^2 denote the limit of the subsequence w, w*2,
w*3, ..., w*n, ..., we can continue as before. Thus we have a process of
generating longer and longer extensions of the natural number sequence.
This can be pictured as follows.


0, 1, 2, ..., w, w+1, w+2, ..., w+w = w*2,

..., w*3, ..., w*4, ..., w*w = w^2, ..., w^2+w^2 = w^2*2,

..., w^3, ..., w^4, ..., w^w, ..., w^(w^w), ..., w^(w^(w^w)), ...,
e_0 = lim_n w_n, ...

where by definition e_0 is the limit of the sequence w_1, w_2, ...,
w_n, ... with w_n = w^(w^(...^w)), an exponential tower of n w's.


Two operations used repeatedly in this process are, first, the addition of
1, and second, the introduction of an expression intended to denote the
limit of a cofinal increasing sequence of previously introduced
expressions. Each of the expressions so generated is called an ordinal
number.
If a and b are ordinal numbers, we write a < b to indicate that
a comes before b in the process of generating the ordinal numbers.
Thus < is an extension of the usual ordering of the natural numbers.
The ordinal numbers which we have introduced fall into three mutually
exclusive classes. Each ordinal number is either 0, a successor ordinal
a+1, or a limit ordinal d = lim_n d[n] where

d[1] < d[2] < ... < d[n] < ....

Operations of ordinal addition, multiplication and exponentiation are
defined in such a way that a+0 = a, a+(b+1) = (a+b)+1,
a+d = lim_n (a+d[n]), a*1 = a, a*(n+1) = (a*n)+a, a*w = lim_n a*n,
w^0 = 1, w^(a+1) = w^a * w, and w^d = lim_n w^(d[n]).
The account of the ordinal numbers which we have given above is
rather sketchy and imprecise. A rigorous set-theoretic development would
proceed as follows. Let A be a linearly ordered set. We say that A is
well-ordered if every nonempty subset of A has a first element. Two
well-ordered sets which are isomorphic are said to have the same order
type. An ordinal number is then defined to be the order type of any
well-ordered set. This definition permits a convenient set-theoretical
discussion of arithmetical operations on the ordinal numbers. For
instance, if a and b are the order types of the well-ordered sets A
and B respectively, then a+b is defined to be the order type of the
well-ordered set which consists of A followed by B. Similarly a*b is
defined to be the order type of the well-ordered set which results upon
replacing each element of B by an isomorphic copy of A. Likewise a^b is
the order type of the set of functions from B into A with finite
support, ordered by last difference.
Each ordinal number less than e_0 can be uniquely expressed as an
exponential polynomial in w. A fairly typical example of an ordinal
number less than e_0 is

w^(w^(w+1)) + w^(w+1) + w.

This representation in terms of exponential polynomials is known as the
Cantor Normal Form.
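Cantor Normal Form lends itself to a compact computable encoding. The sketch below is my own toy representation, not anything from the paper: an ordinal below e_0 is encoded as the tuple of its CNF exponents in non-increasing order, each exponent again such a tuple, and Python's recursive lexicographic comparison of tuples then happens to coincide with the ordinal order.

```python
# Ordinals below e_0 in Cantor Normal Form: the ordinal w^a1 + ... + w^ak
# (a1 >= ... >= ak) is encoded as the tuple (a1, ..., ak) of its exponents,
# each exponent being an ordinal encoded the same way.  () encodes 0.
ZERO = ()
ONE = (ZERO,)   # w^0 = 1
W = (ONE,)      # w^1 = w

def w_pow(a):
    """w raised to the ordinal a."""
    return (a,)

def plus(a, b):
    """Ordinal sum in CNF: terms of a below b's leading term are absorbed."""
    if b == ZERO:
        return a
    return tuple(e for e in a if e >= b[0]) + b

w_plus_1 = plus(W, ONE)
print(plus(ONE, W) == W)   # True: 1 + w = w (ordinal addition is not commutative)
print(plus(W, ONE) > W)    # True: w + 1 > w
# The sample ordinal above, w^(w^(w+1)) + w^(w+1) + w, built up in CNF:
example = plus(plus(w_pow(w_pow(w_plus_1)), w_pow(w_plus_1)), W)
print(example > w_pow(w_pow(w_plus_1)))  # True: it exceeds its leading term
```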
We shall now indicate how ordinal numbers can be used to describe the
growth rate of the function f_PH which was defined above in terms of the
Modified Finite Ramsey Theorem. To each ordinal a <= e_0 we shall
associate a function f_a from positive integers to positive integers. As
explained above, the ordinal numbers <= e_0 are generated from 0 by
repeatedly adding 1 and taking limits of increasing sequences
d = lim_n d[n] where d[1] < d[2] < ... < d[n] < .... We define

f_0(n) = n + 1,

f_(a+1)(n) = f_a(f_a(...f_a(n)...)) = (f_a)^n(n), the n-fold iterate
of f_a applied to n,

f_d(n) = f_(d[n])(n) for limit ordinals d <= e_0.

Of course this definition presupposes that for each limit ordinal
d <= e_0 we have specified an increasing sequence d[1], d[2], ...,
d[n], ... such that d = lim_n d[n]. We omit the details of exactly how
this is done. See for instance the paper by Buchholz and Wainer [7] in
this volume.
In order to illustrate the properties of the functions f_a, a <= e_0,
let us consider the first few of these functions. We have

f_0(n) = n + 1,

f_1(n) = n + 1 + 1 + ... + 1 = n + n = 2*n  (n 1's added),

f_2(n) = 2*2*...*2*n = (2^n)*n ~ 2^n  (n 2's multiplied),

f_3(n) ~ 2^(2^(...^(2^n)...)), an exponential tower of n 2's,

and the growth rate of f_4(n) is already too fast to be described in
exponential notation. Note that f_3 has approximately the same growth
rate as the Ramsey function f_R.
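The first few levels above can be computed directly from the recursion. This sketch is my own; it handles only the finite indices a = 0, 1, 2, ..., where no fundamental sequences are needed:

```python
def f(a, n):
    """f_a(n) for finite indices a, using f_{a+1}(n) = n-fold iterate of f_a."""
    if a == 0:
        return n + 1
    x = n
    for _ in range(n):    # apply f_{a-1} exactly n times, starting at n
        x = f(a - 1, x)
    return x

print([f(1, n) for n in range(1, 6)])  # [2, 4, 6, 8, 10]    i.e. 2*n
print([f(2, n) for n in range(1, 6)])  # [2, 8, 24, 64, 160] i.e. (2^n)*n
print(f(3, 2))  # 2048 = f_2(f_2(2)) = f_2(8); f_3 already grows as a tower
```

Even f(3, 3) = f_2(f_2(f_2(3))) is a number with hundreds of millions of digits, so the computation is only feasible at the very bottom of the hierarchy.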
In general, it can be shown that, for all a < b <= e_0, f_b grows
much faster than f_a. In particular f_a is eventually dominated by f_b
in the sense that f_a(n) < f_b(n) for all sufficiently large n. Thus we
have a hierarchy of fast-growing functions which are indexed by the
ordinal numbers <= e_0 and are ordered by eventual domination. This
hierarchy is due to Wainer and is known as the Wainer hierarchy.
We are now in a position to describe the growth rate of the
Paris-Harrington function f_PH which is associated with the Modified
Finite Ramsey Theorem. Namely, it has been shown that the growth rate of
f_PH is approximately the same as that of f_(e_0). (This result is due
to Paris and Harrington [30] using results of Wainer. See also the
papers by Ketonen-Solovay [21] and Buchholz-Wainer [7] and #6.3 of the
monograph by Graham, Rothschild and Spencer [19].)
    In particular f_PH eventually dominates f_a for all a < e_0. By
contrast f_R is eventually dominated by f_4. Thus f_PH grows much, much
faster than f_R.
The remarkable aspect of all this is that there is really no way to
describe the growth rate of f_PH except by recourse to ordinal numbers.
Logicians have shown that the ordinal numbers up to and including e_0 are
in a sense implicit or unavoidable in the Modified Finite Ramsey Theorem.
This is closely related to the fact that the Modified Finite Ramsey
Theorem cannot be proved in T_fin. (An important classical theorem of
mathematical logic is that e_0 is the limit of the ordinal numbers which
are in a certain sense accessible within T_fin.)

#2. Embeddability of finite trees.
------------- -- ------ -----

Our second example of an "unprovable" finite combinatorial theorem
will be concerned with embeddability of finite trees. We begin by
presenting the relevant definitions.
A finite tree is a finite, partially ordered set T such that: (i) T
has a minimum element (called the root of T); (ii) for each x @ T, the
predecessors of x in T are linearly ordered.
Thus a finite tree can be pictured as a diagram which looks like
this:


/ * * *
|
* * * * *
|
* * *
T <
* *
|
| *

\ * <- root of T.


Note that any two elements of a finite tree have a greatest lower bound.
The greatest lower bound of x and y is denoted x /\y.
Let T and T' be finite trees. An embedding of T into T' is a
one-to-one mapping o: T ->T' such that for all x, y @ T, o(x /\y) =
o(x) /\o(y). (It follows easily that x <=y if and only if o(x) <=
o(y).) We say that T is embeddable into T' if there exists an embedding
of T into T'. We write T\->T' to mean that T is embeddable into T'.
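For small trees the definition can be checked by brute force. This sketch is my own illustration (the encoding and names are assumptions, not the paper's): a tree is a parent list with node 0 as root, and we try every injection o and test whether it preserves greatest lower bounds.

```python
from itertools import permutations

def meet(parent, x, y):
    """x /\ y: the deepest common ancestor of x and y (root's parent is -1)."""
    anc = set()
    while x != -1:
        anc.add(x)
        x = parent[x]
    while y not in anc:
        y = parent[y]
    return y

def embeddable(pT, pU):
    """T \-> U?  Search for a one-to-one o with o(x /\ y) = o(x) /\ o(y)."""
    nT, nU = len(pT), len(pU)
    pairs = [(x, y) for x in range(nT) for y in range(nT)]
    return any(
        all(o[meet(pT, x, y)] == meet(pU, o[x], o[y]) for x, y in pairs)
        for o in permutations(range(nU), nT)
    )

chain2 = [-1, 0]       # two-node chain: a root with one child
chain3 = [-1, 0, 1]    # three-node chain
fork = [-1, 0, 0]      # root with two children
print(embeddable(chain2, fork))   # True: either branch of the fork works
print(embeddable(fork, chain3))   # False: a chain never branches
```

Trying all injections is factorial-time, so this only serves to make the meet-preservation condition concrete on toy examples.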
An important result about embeddability of finite trees, due to J. B.
Kruskal [24], is the following. There is no infinite set of pairwise non-
embeddable finite trees. We shall use the following equivalent
formulation of this result.

2.1. KRUSKAL'S THEOREM. For any infinite sequence of finite trees
T_1, T_2, ..., T_n, ..., there exist indices i and j such that i < j
and T_i is embeddable into T_j.

Kruskal's Theorem itself is not our promised example of an
"unprovable" finite combinatorial theorem about finite trees. Indeed, the
statement of Kruskal's Theorem involves an infinite sequence and is
therefore definitely not finitary. (Kruskal's Theorem cannot even be
stated in the language of T_fin.) However H. Friedman has shown that
Kruskal's Theorem has a certain consequence which is finitary in its
statement, but not in its proof. This consequence, known as Friedman's
Finite Form, is our promised example.

2.2. FRIEDMAN'S FINITE FORM OF KRUSKAL'S THEOREM. For any positive
integer c, there exists a positive integer n = n_c which is so large
that the following holds. If T_1, T_2, ..., T_n is any finite sequence of
finite trees with |T_i| <= c*i for all i <= n, then there exist indices i
and j such that i < j <= n and T_i \-> T_j.

Clearly Friedman's Finite Form is a finitary statement, referring
only to finite sets and structures. Hence Friedman's Finite Form can be
stated within T_fin. The following unprovability theorem is due to
Friedman (1981, unpublished). For a full exposition of the theorem and
its proof, see Simpson [37].

2.3. THEOREM. Friedman's Finite Form of Kruskal's Theorem is not
provable in T_fin. In other words, Friedman's Finite Form of Kruskal's
Theorem cannot be proved by finite combinatorial methods alone.

In this result, the bound of c*i can be strengthened considerably.
See Loebl-Matousek [26] (this volume) and Smith [39].
Let me now indicate how Friedman's Finite Form can be derived from
Kruskal's Theorem. Suppose for a contradiction that Friedman's Finite
Form is false. Let V_c be the set of all finite sequences <T_1,T_2,...,T_n>
such that |T_i| <= c*i for all i <= n, and T_i \-> T_j for no i and j with i <
j <= n. Then for some c, V_c is infinite. If we partially order this V_c
by extension, we see that it is an infinite, finitely branching tree.
Hence by König's Infinity Lemma [23], it has an infinite path. In other
words, there is an infinite sequence of finite trees <T_1,T_2,...,T_n,...>
such that |T_i| <= c*i for all i, and T_i \-> T_j for no i and j with i < j.
This contradicts Kruskal's Theorem.
"
In the above derivation, it is interesting to note how Konig's Lemma
"
was applied in a non-traditional way. Traditionally, Konig's Lemma is
used to derive infinitistic theorems from finitistic ones. For example
"
Konig's Lemma can be used to deduce the infinite form of Dilworth's
"
Theorem from the finite form. Indeed, the title of Konig's paper [23]
translates to: "On a method of drawing conclusions from the finite to the
"
infinite." This means that Konig must have viewed his lemma as a one-way
pipeline from the finite to the infinite. However, there is no reason not
to reverse the flow and derive finitistic statements from infinitistic
ones. Such was the procedure of the previous paragraph.
By Theorem 2.3, Friedman's Finite Form cannot be proved without some
use of infinite sets. Actually Friedman obtained a much stronger result.
Namely, neither Kruskal's Theorem nor Friedman's Finite Form can be proved
without at least some weak use of uncountable sets. (The precise result
is that Friedman's Finite Form cannot be proved in the formal system ATR_0.
For a proof and references to the literature, see Simpson [37].) Thus
Friedman's Finite Form of Kruskal's Theorem is, so to speak, even more
unprovable than the Modified Finite Ramsey Theorem!
This higher degree of unprovability is reflected in the associated
growth rate. Using the notation n_c as in the statement of Friedman's
Finite Form, it is natural to ask the following question. How fast does
n_c grow as a function of c? The answer is that n_c grows faster than
the Wainer function f_{G_0} where G_0 is a certain ordinal number which is
much, much larger than e_0. In order to define G_0, let OR be the class
of all ordinal numbers and consider the following hierarchy of continuous
increasing functions h_a: OR -> OR where a ranges over OR. Given b in OR
we define h_0(b) = w^b and, for all a > 0, h_a(b) = the b-th simultaneous
fixed point of the functions h_{a'}, a' < a. Then G_0 is defined to be the
least ordinal g > 0 such that h_a(b) < g for all a, b < g. This is a
very large ordinal number. By comparison e_0 = h_1(0) is minuscule. The
Wainer hierarchy can be extended through the ordinals <= G_0. It can then
be shown that f_{G_0} is eventually dominated by the Friedman function n_c (as
a function of c). Indeed, it can be shown that the Friedman function
eventually dominates f_a for certain ordinals a which are somewhat
larger than G_0.
What about n_c for small values of c? This question points to
another remarkable aspect of Friedman's Finite Form. Consider for
instance c = 12. Please bear in mind that n_12 is a single, specific
positive integer. (Namely, it is the smallest positive integer n with
the property that, in any sequence of finite trees T_1, T_2, ..., T_n with
|T_i| <= 12*i for all i <= n, there is at least one embedding T_i \-> T_j, i <
j <= n.) Hence the existence of n_12 can be proved in T_fin. However, it
turns out that n_12 is extremely large. It can be shown that n_12 is much
larger than

     2^(2^(2^( ... ^2)))   (an exponential tower of 1000 2's).

In fact, Friedman has shown that any proof of the existence of n_12 within
T_fin would require more than

     2^(2^(2^( ... ^2)))   (an exponential tower of 1000 2's)

sheets of paper! Hence there is no hope for a manageable proof of the
existence of n_12 within T_fin. (For details see Smith [39].) This version

#3. Exponential notation.
----------- --------

Our third example of an "unprovable" finitistic theorem is number-
theoretical rather than combinatorial. The theorem is concerned with one
of the most elementary number-theoretic concepts, namely base b
notation. (The special case b = 10 is the usual decimal notation.)
Let b be a positive integer >=2. Any nonnegative integer n can
be written uniquely in the form

     n = b^(n_1)*c_1 + ... + b^(n_k)*c_k


where k >= 0, n_1 > ... > n_k >= 0, and 0 < c_i < b for i = 1, ..., k.
This is known as the base b representation of n. For example the base 2
representation of 266 is

     266 = 2^8 + 2^3 + 2.


We may extend this by writing each of the exponents n_1, ..., n_k in
base b notation, then doing the same for each of the exponents in the
resulting representations, ... until the process stops. This yields the
so-called complete base b representation of n. For example the complete
base 2 representation of 266 is obtained as:


     266 = 2^8 + 2^3 + 2

         = 2^(2^3) + 2^(2+1) + 2

         = 2^(2^(2+1)) + 2^(2+1) + 2.


We now present the definitions which are needed to state the third
"unprovable" theorem. Let R_b(n) be the nonnegative integer which
results if we take the complete base b representation of n and then
replace each b by b+1. Thus R_b is a "base change" function. For
example

     R_2(266) = 3^(3^(3+1)) + 3^(3+1) + 3.
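The base change function R_b is straightforward to compute exactly,
since Python integers have arbitrary precision. A short sketch (the
function name `rebase` is mine):

```python
def rebase(n, b, c):
    """Take the complete base-b representation of n and replace each b by c.
    With c = b + 1 this is the base change function R_b."""
    total, e = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            # digits stay as they are; only the base is replaced,
            # recursively inside the exponents as well
            total += digit * c ** rebase(e, b, c)
        n //= b
        e += 1
    return total

# R_2(266) = 3^(3^(3+1)) + 3^(3+1) + 3, as in the text:
print(rebase(266, 2, 3) == 3**81 + 3**4 + 3)   # True
```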


For each nonnegative integer n we recursively define a sequence of
nonnegative integers (n)_0, (n)_1, ..., (n)_k, ... by

     (n)_0 = n,

     (n)_{k+1} = R_{k+2}((n)_k) - 1   if (n)_k > 0,

     (n)_{k+1} = 0                    if (n)_k = 0,

for all k >= 0. This is known as the Goodstein sequence beginning with
n. For example the Goodstein sequence beginning with 266 is

     (266)_0 = 266 = 2^(2^(2+1)) + 2^(2+1) + 2,

     (266)_1 = 3^(3^(3+1)) + 3^(3+1) + 3 - 1

             = 3^(3^(3+1)) + 3^(3+1) + 2,

     (266)_2 = 4^(4^(4+1)) + 4^(4+1) + 1,

     (266)_3 = 5^(5^(5+1)) + 5^(5+1),

     (266)_4 = 6^(6^(6+1)) + 6^(6+1) - 1

             = 6^(6^(6+1)) + 6^6*5 + 6^5*5 + ... + 6*5 + 5,

     (266)_5 = 7^(7^(7+1)) + 7^7*5 + 7^5*5 + ... + 7*5 + 4,

       ...


We can see that the Goodstein sequence beginning with 266 appears
initially to increase very rapidly. Nevertheless, despite appearances, it
can be proved that this sequence converges to 0. In other words, (266)_k =
0 for all sufficiently large k. This surprising conclusion is due to
Goodstein [18] who actually proved the same result for all Goodstein
sequences:

3.1. GOODSTEIN'S THEOREM. For all n there exists k such that (n)_k =
0. In other words, every Goodstein sequence converges to 0.

Obviously the statement of Goodstein's Theorem is totally finitistic,
referring only to natural numbers and elementary operations on them. On
the other hand, we shall see that Goodstein's proof of his theorem is
definitely not finitistic. Moreover Kirby and Paris [22] have shown that
no finitistic proof of the theorem is possible. Thus Goodstein's Theorem
will turn out to be our promised example of an "unprovable" number-
theoretic theorem.
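The first few terms of a Goodstein sequence can be computed directly
from the definition; the terms soon become astronomically large, but the
beginning is tractable with exact integer arithmetic. A sketch (helper
names are my own):

```python
def rebase(n, b, c):
    """Complete base-b representation of n with each b replaced by c (= R_b for c = b+1)."""
    total, e = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            total += digit * c ** rebase(e, b, c)
        n //= b
        e += 1
    return total

def goodstein(n, max_terms=50):
    """Initial terms (n)_0, (n)_1, ... of the Goodstein sequence,
    stopping at 0 or after max_terms terms."""
    seq, b = [n], 2
    while seq[-1] > 0 and len(seq) < max_terms:
        seq.append(rebase(seq[-1], b, b + 1) - 1)   # (n)_{k+1} = R_{k+2}((n)_k) - 1
        b += 1
    return seq

print(goodstein(3))                                  # [3, 3, 3, 2, 1, 0]
print(goodstein(266, 3)[1] == 3**81 + 3**4 + 2)      # (266)_1 as in the text
print(goodstein(266, 3)[2] == 4**1024 + 4**5 + 1)    # (266)_2 as in the text
```

The sequence beginning with 3 already reaches 0 only at k = 5, and the
sequence beginning with 4 takes on the order of 10^121210694 steps, so
brute computation of the convergence point is hopeless in general.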
I shall now sketch the proof of Goodstein's Theorem. The proof is
infinitistic in that it makes direct use of the ordinal numbers up to e_0.
Given b >= 2 and any natural number n, we denote by R^w_b(n) the result
of taking the complete base b representation of n and replacing each
occurrence of b by w. Thus R^w_b(n) is the Cantor Normal Form of some
ordinal number less than e_0. For example

     R^w_2(266) = w^(w^(w+1)) + w^(w+1) + w.

Note that R^w_b(m) < R^w_b(n) if and only if m < n. Also R^w_b(0) = 0, and
R^w_b(n) = R^w_{b+1}(R_b(n)). Now given n, consider the Goodstein sequence

     (n)_0, (n)_1, ..., (n)_k, (n)_{k+1}, ...

and the corresponding sequence of ordinal numbers

     R^w_2((n)_0), R^w_3((n)_1), ..., R^w_{k+2}((n)_k), R^w_{k+3}((n)_{k+1}), ....


For all k such that (n)_k > 0, we have

     R^w_{k+3}((n)_{k+1}) = R^w_{k+3}(R_{k+2}((n)_k) - 1)

                          < R^w_{k+3}(R_{k+2}((n)_k))

                          = R^w_{k+2}((n)_k).

Since there is no infinite descending sequence of ordinal numbers, it
follows that (n)_k = 0 for some k. This completes the proof.
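The assignment R^w_b can be made completely concrete: represent an
ordinal below e_0 in Cantor Normal Form as a tuple of (exponent,
coefficient) pairs in decreasing order of exponent, with the exponents
themselves so represented. The following sketch (all names are mine)
checks that the ordinals attached to the first few terms of the
Goodstein sequence for 266 really do descend:

```python
def rebase(n, b, c):
    """Complete base-b representation of n with each b replaced by c."""
    total, e = 0, 0
    while n > 0:
        d = n % b
        if d:
            total += d * c ** rebase(e, b, c)
        n //= b
        e += 1
    return total

def to_ord(n, b):
    """R^w_b(n): replace b by w in the complete base-b representation of n.
    An ordinal < e_0 is a tuple of (exponent, coefficient) pairs in
    decreasing order of exponent; () is the ordinal 0."""
    terms, e = [], 0
    while n > 0:
        d = n % b
        if d:
            terms.append((to_ord(e, b), d))
        n //= b
        e += 1
    return tuple(reversed(terms))

def ord_lt(x, y):
    """Compare two Cantor Normal Forms: lexicographic on (exponent, coefficient),
    with exponents compared recursively."""
    for (e1, c1), (e2, c2) in zip(x, y):
        if e1 != e2:
            return ord_lt(e1, e2)
        if c1 != c2:
            return c1 < c2
    return len(x) < len(y)

# Goodstein sequence for 266 and the associated descending ordinals:
seq, b = [266], 2
for _ in range(4):
    seq.append(rebase(seq[-1], b, b + 1) - 1)
    b += 1
ords = [to_ord(m, k + 2) for k, m in enumerate(seq)]
print(all(ord_lt(ords[k + 1], ords[k]) for k in range(4)))   # True
```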
We now state the following unprovability theorem due to Kirby and
Paris.

3.2. THEOREM (Kirby-Paris [22]). Goodstein's Theorem 3.1 is not
provable in T_fin. In other words, Goodstein's Theorem cannot be proved
by finite combinatorial methods alone.

This gives our third example of an "unprovable" finite combinatorial
theorem. As in the case of our other two examples, the "unprovability" is
reflected in the very rapid growth rate of a certain function. For all n
let f_G(n) be the smallest k such that (n)_k = 0. Kirby and Paris have
shown that the Goodstein function f_G grows approximately as fast as
f_{e_0}. In particular f_G eventually dominates f_a for all a < e_0. (An
extremely elegant, computational proof of these results has been given by
Cichon [10]. See also Buchholz and Wainer [7].)

#4. Appendix
--------

In this section I shall discuss various developments which are
related to the results of ##1-3.

* * *

Ramsey's Theorem is only the beginning of a highly developed field of
research. For an excellent but by now somewhat dated survey of this
rapidly changing field, see the monograph Ramsey Theory by Graham,
Rothschild, and Spencer [19]. I want to discuss some results in Ramsey
Theory which have some bearing on "unprovable" finite combinatorial
theorems.
One of the best-known generalizations of Ramsey's Theorem is the
" '
Canonical Ramsey Theorem due to Erdos and Rado (see # 5.5 of [19]).
Recently Kanamori and McAloon [20] have used the Canonical Ramsey Theorem
to give an elegant alternative to the results of Paris and Harrington.
For any set X of positive integers, a coloring f: [X]^k -> X is said to be
regressive if f(F) < min(F) for all F in [X]^k such that min(F) > 1.
The Kanamori-McAloon version of the Modified Finite Ramsey Theorem reads
as follows.

4.1. THEOREM. For all k and m there exists n so large that, for all
regressive f: [X]^k -> X = {1,2,...,n}, there exists Y <= X such that |Y| >= m
and, for F in [Y]^k, f(F) depends only on min(F).

Kanamori and McAloon have shown that this statement has the same
logical and growth-rate properties as the Paris-Harrington statement 1.2.
In particular, the Kanamori-McAloon statement is not provable in T_fin.
The advantage of this approach is that the proof of the Kanamori-
McAloon unprovability result is much simpler than the proof of the
corresponding Theorem 1.3 of Paris and Harrington. To those who wish to
present a version of the Paris-Harrington result in a first-year graduate
logic course but do not want to spend too much time on the combinatorial
details, I recommend the first part of Kanamori-McAloon [20].

* * *

In recent years there have appeared a number of infinitistic Ramsey-
type theorems which are topological in nature. The prototypical result of
this kind is the following theorem due to Galvin and Prikry. For any
infinite set X, let [X]^oo be the set of all infinite subsets of X. We
regard [X]^oo as a topological space with basic open sets of the form

     O^X_FG = {Y in [X]^oo : F <= Y and G /\ Y = O}

where F and G are disjoint finite subsets of X.


4.2. GALVIN-PRIKRY THEOREM ([17]). Let X be an infinite set. If

[X]^oo = C_1 U ... U C_l with each C_i closed, then there exists Y in [X]^oo such
that [Y]^oo <= C_i for some i.

(The hypothesis that each C_i be a closed subset of [X]^oo can be
weakened considerably. For instance, Galvin and Prikry proved their
theorem with "closed" replaced by "Borel." The sharpest result in this
direction is due to Ellentuck. For details see [9].)
Friedman, McAloon and Simpson [15] have shown that, despite its
infinitary nature, the Galvin-Prikry Theorem 4.2 is relevant to finite
combinatorics. Indeed, Theorem 4.2 can be converted into a finitary
statement, just as Paris and Harrington converted the Infinite Ramsey
Theorem 1.4 into the Modified Finite Ramsey Theorem 1.2. (More precisely,
the miniaturization procedure of [15] is modeled on the original idea of
Paris [29] rather than on Paris-Harrington [30].) This finitary statement
due to Friedman, McAloon and Simpson turns out to be unprovable not only
in T_fin but also in the much stronger theory ATR_0 which was mentioned near
the end of #2. In addition, the associated fast-growing function grows at
about the same rate as f_{G_0}. (These results are sharp in the sense that
ATR_0 is the weakest natural subsystem of second order arithmetic in which
4.2 is provable. For details see [15].)
As mentioned above, the Galvin-Prikry Theorem has served as the
prototype for a large number of extensions and generalizations of the
Infinite Ramsey Theorem. A comprehensive survey of this area can be found
in [9]. Let me describe only one of these results, due to Carlson and
Simpson [8]. For any set X, a partition of X is a pairwise disjoint
collection of nonempty subsets of X whose union is all of X. If X is
infinite, let (X)^oo be the set of all infinite partitions of X. For any Y
in (X)^oo, put

     (Y)^oo = {Z in (X)^oo : Z is coarser than Y}.

We regard (X)^oo as a topological space with basic open sets of the form

     O^X_P = {Y in (X)^oo : Y|F = P}

where: P is a partition of a finite subset F of X; Y|F is the restriction
of Y to F, i.e. Y|F = {y /\ F : y in Y and y /\ F != O}.

4.3. DUAL GALVIN-PRIKRY THEOREM (Carlson and Simpson [8]). Let X
be an infinite set. If (X)^oo = C_1 U ... U C_l with each C_i closed, then
there exists Y in (X)^oo such that (Y)^oo <= C_i for some i.

(As in the case of Theorem 4.2, the hypothesis "C_i closed" in Theorem
4.3 can be weakened considerably. For details see [8].)
In view of the results of Friedman, McAloon and Simpson which were
discussed above, it is natural to ask whether Theorem 4.3 has any finite
combinatorial consequences which are unprovable in relatively strong
subsystems of second order arithmetic. (This was my original motivation
for formulating the Dual Galvin-Prikry Theorem. See #7 of [8].) Almost
nothing is known about this question. It is known that Theorem 4.2 can be
deduced from Theorem 4.3 as an easy corollary. Hence by [15] the growth
rate associated to Theorem 4.3 is at least as great as f_{G_0}. However, it
is not known to be any greater. Similarly, almost nothing is known about
the following, closely related, axiomatic question. What set existence
axioms are needed to prove Theorem 4.3? It follows from [15] that at
least the axioms of ATR_0 are needed, and it is reasonable to conjecture
that much more is needed, but this conjecture remains open. Analogous
remarks could be made about the other generalizations and extensions of
the Galvin-Prikry Theorem which are discussed in [9].
(Note added January 31, 1986. Recent work of Blass, Hirst and
---- ----- ------- --- ----
Simpson [4] yields some non-trivial upper bounds on the set existence
axioms and growth rates associated with Hindman's Theorem and the Dual
Galvin-Prikry Theorem. However, more work remains to be done.)

* * *

A large part of modern Ramsey Theory is concerned with the following
theorem of van der Waerden. Let N be the set of nonnegative integers. A
finite arithmetical progression is a subset of N of the form

{a, a + d, a + 2*d, ..., a + k*d}.


4.4. VAN DER WAERDEN'S THEOREM. If N = C_1 U ... U C_l then, for
some i, C_i contains arbitrarily long finite arithmetical progressions.

An equivalent statement of Van der Waerden's Theorem reads as follows.
For any k and l, there exists n so large that, if {1, 2, ..., n} = C_1 U
... U C_l, then for some i, C_i contains an arithmetical progression of
length k. Define W_l(k) to be the smallest n such that this conclusion
holds. In view of #1, it is natural to ask: What is the growth rate of
W_l(k) as a function of l and k?
The currently known upper bound for W_l(k) is something like the
Ackermann function, i.e. the function f_w of #1. The problem of improving
this bound is open, famous, and apparently difficult. (See for instance
the paper by Brackin [5] in this volume.)
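For the very smallest cases W_l(k) can be found by exhaustive search;
the classical value W_2(3) = 9 is within easy reach. A brute-force
sketch of my own, feasible only for tiny l and k:

```python
from itertools import product

def has_ap(s, k, n):
    """Does the set s of integers in {1,...,n} contain an arithmetical
    progression of length k?"""
    for a in range(1, n + 1):
        d = 1
        while a + (k - 1) * d <= n:
            if all(a + i * d in s for i in range(k)):
                return True
            d += 1
    return False

def vdw(l, k):
    """Smallest n such that every l-coloring of {1,...,n} has a
    monochromatic arithmetical progression of length k."""
    n = k
    while True:
        if all(any(has_ap({i + 1 for i, c in enumerate(col) if c == j}, k, n)
                   for j in range(l))
               for col in product(range(l), repeat=n)):
            return n
        n += 1

print(vdw(2, 3))   # 9
```

The search space grows as l^n, so this approach collapses almost
immediately; the point of the growth-rate question above is whether
anything fundamentally better than Ackermann-like bounds is possible.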

It is unknown whether W_2(k) is eventually dominated by f_n(k) for any
fixed n. Should this turn out not to be the case, an interesting
unprovability result would follow. Namely, it would follow that Van der
Waerden's Theorem is not provable in the formal systems WKL_0 and WKL_0^+ of
[38]. This would be interesting because WKL_0 and WKL_0^+ are known to be
adequate for the development of a large part of ordinary mathematics.
Thus we would be able to conclude that any proof of Van der Waerden's
Theorem must use strong methods which go beyond those that are usually
needed in mathematics.
For instance, the standard combinatorial proof of Van der Waerden's
Theorem uses a strong method known as double induction. For each fixed k,
the existence of W_l(k) is proved by induction on l assuming the existence
of W_{l'}(k') for all k' < k and all l'. This type of induction is
unavailable in WKL_0 and WKL_0^+ and is rarely needed in mathematical
practice. The interesting problem is whether all proofs of Van der
Waerden's Theorem must necessarily involve such unusual methods.
Similar remarks apply to the theorems of Hales-Jewett, Graham-Leeb-
Rothschild, and Szemerédi. (For an exposition of these theorems, see
Chapter 2 of [19].) Each of these theorems is a generalization of Van der
Waerden's Theorem. In each case, the corresponding growth rate is not
known to be eventually dominated by f_n for any finite n, and it would be
very interesting if it were not so dominated.

* * *

Furstenberg and his coworkers (see [16]) have developed a beautiful
approach to Van der Waerden's Theorem and related results. Namely, they
have shown that such results can often be deduced from general theorems of
topological dynamics and ergodic theory. (In some cases, the process can
be reversed so that the combinatorial theorems are in fact interchangeable
with their dynamical counterparts.) Thus Furstenberg and his coworkers
have provided a sort of analytical or perhaps geometrical context for this
branch of combinatorics. See for instance the paper by Bergelson [3] in
this volume.
From our viewpoint here, Furstenberg's approach is intriguing because
it introduces, into finite combinatorics, methods which appear to be
highly abstract and set-theoretical. For instance, Furstenberg's proof of
Szemerédi's Theorem (Chapter 7 of [16]) uses a complicated structural
analysis of measure-preserving transformations. This seems to require a
heavy dose of transfinite induction. Similarly, the Furstenberg-Weiss
proof of Hindman's Theorem (Chapter 8 of [16]) involves Zorn's Lemma
applied to a non-metrizable Tychonoff product space. (See the proof of
Lemma 8.4 in [16].) Developments such as these raise the question of
whether the highly abstract methods are ever really needed in order to
prove particular combinatorial theorems. If they are in fact needed, then
this would probably lead to some results concerning unprovability and
fast-growing functions. Unfortunately, at the present time, there is
nothing but speculation here.
(Note added January 31, 1986. Recent research of Blass, Hirst and
---- ----- ------- --- ----
Simpson [4] seems to indicate that highly abstract, set-theoretical
methods are less essential for the study of dynamical systems than they
might at first appear to be. For instance, the main results of Chapter 8
of [16], including the Auslander-Ellis Theorem 8.7, can be proved in a
relatively weak subsystem of second order arithmetic known as ACA_0^+.
However, the question of which set existence axioms are needed to prove
Szemerédi's Theorem is still open.)

* * *

Kruskal's Theorem is only one highlight of a very interesting branch
of combinatorics known as well-quasi-ordering theory (wqo theory for
short). The general notions involved are as follows. A quasi-ordering is
a set Q (usually infinite) together with a binary relation <= which is
reflexive and transitive on Q. A well-quasi-ordering is a quasi-ordering
Q with the property that, for every infinite sequence <Q_n : n in N> of
elements of Q, there exist indices i and j such that i < j and Q_i <= Q_j.
Thus for example Kruskal's Theorem may be expressed by saying that the set
V of all finite trees is wqo under embeddability.
There are a number of general theorems which assert that various
methods of constructing new quasi-orderings from old ones preserve the wqo
property. For instance, any well-quasi-ordered sum of wqo's is wqo; any
finite product of wqo's is wqo; the set of finite sequences of elements of
a wqo (appropriately quasi-ordered) is wqo.
Some of the deeper results of wqo theory make use of a refined notion
due to Nash-Williams known as better-quasi-ordering (abbreviated bqo).
Better-quasi-orderings are more difficult to define and to work with than
well-quasi-orderings, but they compensate by having better infinitary
preservation properties. For example, the set of transfinite sequences of
elements of a bqo (appropriately quasi-ordered) is bqo. This is not true
for wqo's.
A simplified exposition of bqo theory is due to Simpson [36]. For a
further simplification, see the contribution to this volume by van
Engelen, Miller, and Steel [12]. Some recent results of bqo theory are
discussed in the paper by Nešetřil and Thomas [27] in this volume. The
following theorem of Laver [25] is one of the triumphs of bqo theory.

4.5. LAVER'S THEOREM. The set L of all countable linear orderings
is well-quasi-ordered under embeddability.

A consequence of Laver's Theorem is that there is no infinite family
of countable linear orderings which are pairwise non-embeddable in each
other.
It seems reasonable to speculate that Laver's Theorem and other
results of bqo theory could give rise to finite combinatorial theorems
which are unprovable in T or in stronger systems. However, at the
fin
present time, there are no solid results in this direction.

* * *

One of the most difficult and impressive results of wqo theory is a
recent theorem due to Robertson and Seymour. Let G be a finite graph. A
minor of G is any graph which is obtained from G by deleting and
contracting edges. Write H <=_m G if and only if H is isomorphic to a minor
of a subgraph of G.

4.6. ROBERTSON-SEYMOUR THEOREM. The set G of all finite graphs is
well-quasi-ordered under <=_m.

One exciting consequence of this theorem is a topological result
which used to be known as Wagner's Conjecture: For any 2-manifold, M,
there are only finitely many finite graphs which are not embeddable into M
and are minimal with this property. This generalizes the famous theorem
of Kuratowski to the effect that every non-planar graph contains a copy of
either K_5 or K_{3,3}.
The proof of the Robertson-Seymour Theorem 4.6 is very complicated
and will extend over many journal articles. See for example [33], [34].
In view of Friedman's work as described in #2, it is natural to ask
whether the Robertson-Seymour Theorem leads to any finite combinatorial
consequences which are unprovable in T_fin. This is closely tied to
the question of which set existence axioms are needed to prove the
Robertson-Seymour Theorem.
Inspection of the Robertson-Seymour proof shows that it consists
largely of constructive decomposition methods which are formalizable in
T_fin. However, Robertson and Seymour also use Kruskal's Theorem in at
least one crucial point. As explained in #2, Kruskal's Theorem is
inherently highly nonconstructive. It is an open question whether the
Robertson-Seymour Theorem is equally nonconstructive. For instance, is
the Robertson-Seymour Theorem provable in ATR_0? Friedman's work gives us
reason to suspect that the Robertson-Seymour Theorem is not provable in
ATR_0 and perhaps not even in some much stronger systems. However, these
suspicions have not yet been verified.

* * *

As explained in #2, Friedman has shown that Kruskal's Theorem and its
finite form are unprovable in a certain fairly strong subsystem of second
order arithmetic which is related to the ordinal G_0. This result is based
on a direct correlation between finite trees and a notation system for the
ordinals less than G_0.
To illustrate the correlation, let us write h(a,b) = h_a(b) and consider
the expression


h(h(h(0,0),0)+h(0,0),h(0,h(0,0)+h(0,0)))


which is a notation for one particular ordinal which is less than G_0. The
above expression may be written in tree form as


                          h
                        /   \
                       +     h
                      / \   / \
                     h   h 0   +
                    / \ / \   / \
                   h  0 0  0 h   h
                  / \       / \ / \
                 0   0     0  0 0  0

This is a structured finite tree whose nodes are labeled with the symbols
h, +, 0. By means of some coding tricks, one can get rid of the labels
and associate to each ordinal a < G_0 a finite, unlabeled, unstructured
tree T_a of the kind considered in Kruskal's Theorem. This can be done in
such a way that T_a \-> T_b implies a <= b. Thus Kruskal's Theorem implies
that the ordinals less than G_0 are well-ordered. This close relationship
between finite trees and ordinal notations is one of the key ideas in
Friedman's proof of Theorem 2.3 (exposited in [37]).

* * *

In addition, Friedman has found another well-quasi-ordering result
which generalizes Kruskal's Theorem and gives rise to certain ordinal
numbers which are much, much larger than G_0. The generalization is
phrased in terms of finite trees whose nodes are labeled with positive
integers.
To be precise, if n is a positive integer, we define an n-labeled
finite tree to be an ordered pair (T,l) where T is a finite tree and l: T ->
{1,2,...,n}. Thus l is a labeling function which assigns to each node x in
T its label l(x) in {1,2,...,n}. If (T_1,l_1) and (T_2,l_2) are two n-labeled
finite trees, we say that (T_1,l_1) is gap-embeddable into (T_2,l_2) if there
exists an embedding f: T_1 -> T_2 with the following additional properties:
(i) l_1(x) = l_2(f(x)) for all x in T_1; (ii) if y in T_1 is an immediate
successor of x in T_1, then l_2(z) >= l_2(f(y)) for all z in T_2 in the interval
f(x) < z < f(y).
Friedman's generalization of Kruskal's Theorem reads as follows.

4.7. THEOREM (Friedman). For each positive integer n, the n-labeled
finite trees are well-quasi-ordered under gap-embeddability.

This theorem has been used by Robertson and Seymour in their proof of
the Robertson-Seymour Theorem 4.6. (See [14].)
Friedman (unpublished) has shown that Theorem 4.7 and the associated
finite form (analogous to Theorem 2.2) are not provable in a certain
well-known formal system Pi^1_1-CA_0 which is much, much stronger than ATR_0.
In addition, the associated fast-growing functions grow much, much faster
than f_{G_0}. For an exposition of these results, see Simpson [37].

* * *

There is a certain obvious definitional resemblance between
Friedman's n-labeled finite trees and Takeuti's ordinal diagrams of finite
order [40]. It can also be shown that these two notions give rise to the
same class of ordinal numbers. However, the two notions are conceptually
quite distinct. For a detailed comparison, see Takeuti [41]. The biggest
difference is that each n-labeled finite tree has only finitely many
predecessors, while ordinal diagrams are well-ordered and typically have
infinitely many predecessors. In addition, the ordering relation for
ordinal diagrams is much more difficult to describe than the gap-
embeddability relation for finite n-labeled trees.
In their contribution to this volume, Okada and Takeuti [28] present
a new notion of quasi-ordinal diagrams which is conceptually intermediate
between ordinal diagrams and n-labeled finite trees. They show that the
new notion gives rise to even wider classes of ordinal numbers.
In this context, it is appropriate to mention another well-quasi-
ordering result, due to Klaus Leeb (unpublished).

4.8. THEOREM (Leeb). For each positive integer n, the set of
n-jungles is well-quasi-ordered under embeddability by means of n-jungle
morphisms.

Unfortunately, the definition of n-jungles and their morphisms is
category-theoretic and much too complex to be given here. (The definition
was sketched by Leeb in his talk at this conference and in some
handwritten notes which were circulated after the conference.)
It is tempting to conjecture that Leeb's n-jungles give rise to the
same ordinals and fast-growing functions which come out of Friedman's
n-labeled finite trees. However, this conjecture has not yet been
verified.

* * *

"
Schutte and Simpson [35] have considered the problem of what happens
when Theorem 4.7 (Friedman's generalization of Kruskal's Theorem) is
restricted to the case of finite trees with no ramification, i.e. finite
linearly ordered sets. The resulting combinatorial statement reads as
follows.

4.9. THEOREM. For each n let S_n be the set of finite sequences of
elements of {1,2,...,n}. Then S_n is well-quasi-ordered under gap-
embeddability.

"
Schutte and Simpson [35] show that Theorem 4.9 for each fixed n is
provable in ACA but that the full statement, for all n, is not provable
0
in ACA . They also present a Friedman-style finite form of Theorem 4.9
0
(analogous to Theorem 2.2) which is not provable in T . This gives yet
fin
another example of a simply stated, "unprovable," finite combinatorial
theorem.

* * *

Abrusci, Girard and Van de Wiele [2] have used the Goodstein-
Kirby-Paris ideas to obtain finite combinatorial theorems which are
unprovable in formal systems which are somewhat stronger than T_fin.
Consider for instance the system T*_fin which consists of T_fin plus a truth
predicate for T_fin. The associated ordinal is e_{e_0}. Abrusci et al. have
shown that there is a notion of generalized Goodstein sequence which is
slightly more complicated than Goodstein's and whose convergence to zero
is unprovable in T*_fin. The associated growth rate is approximately f_{e_{e_0}}.
The work of Abrusci et al. is based on Girard's notion of dilator
which is a category-theoretic approach to ordinal notations. Abrusci et
al. have exhibited a general scheme whereby any notation system generated
by dilators gives rise to an associated class of generalized Goodstein
sequences. For an introduction to this work, see the articles by Abrusci
[1] and Abrusci, Girard and Van de Wiele [2] in this volume.

* * *

The paper of Kirby and Paris [22] on Goodstein sequences contains
another interesting result, concerning the hydra game.
A hydra is a finite tree in the sense of #2 above. The hydra game is
a certain infinite game with perfect information, taking the form of a
battle between Hercules and a given hydra T_1. For each n >= 1, the nth
move of the game is as follows. First, Hercules chooses a maximal element
y_n of the finite tree T_n and deletes it from T_n, thus forming a new
finite tree T_n^-. Then T_n^- sprouts n replicas of that part of T_n^-
which lies above y_n's immediate predecessor, x_n. The replicas are
attached to the immediate predecessor of x_n. The resulting finite tree
is called T_{n+1}. (If x_n is the root of T_n, then we set
T_{n+1} = T_n^-.) Hercules wins if and only if, for some n, T_n consists
only of its root.
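
The moves just described can be made concrete. A toy model, assuming
hydras are nested lists of children and Hercules always attacks the first
head found; this representation and strategy are ours, purely for
illustration, not Kirby-Paris's:

```python
def step(tree, n):
    """One move of the hydra game on `tree`, a hydra given as a nested
    list of children (a leaf, i.e. a head, is [])."""
    # If some head y is attached directly to the root, x is the root:
    # just delete y and sprout nothing (the case T_{n+1} = T_n^-).
    for i, child in enumerate(tree):
        if child == []:
            return tree[:i] + tree[i + 1:]

    def attack(node):
        # look for x (a child of node) carrying a head y (a leaf child)
        for i, x in enumerate(node):
            for j, y in enumerate(x):
                if y == []:
                    x_minus = x[:j] + x[j + 1:]   # x with the head deleted
                    # keep x_minus and attach n replicas of it to node
                    return node[:i] + [x_minus] * (n + 1) + node[i + 1:]
            deeper = attack(x)                    # no head here: go deeper
            if deeper is not None:
                return node[:i] + [deeper] + node[i + 1:]
        return None

    return attack(tree)

# Hercules always wins: the hydra [[[]]] dies after 3 moves.
h, n = [[[]]], 1
while h:
    h = step(h, n)
    n += 1
print(n - 1)   # 3
```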
Kirby and Paris have proved the following.

4.10. THEOREM (Kirby-Paris [22]). (i) Every strategy for Hercules
is a winning strategy. (ii) The fact that every recursive strategy for
Hercules is a winning strategy cannot be proved in T_fin.

Thus in 4.10(ii) we have another example of an "unprovable"
finitistic theorem.

* * *

The idea of hydra games has been used by Buchholz [6] to give a
finite combinatorial theorem which is not provable in a certain formal
system which is stronger than any that has been mentioned previously in
this paper. The hydras of Buchholz are finite trees whose nodes are
labeled with elements of the set {0,1,2,...,w}. The rules of the Buchholz
hydra game are somewhat complicated to describe. Buchholz has shown that,
in his hydra game, every strategy for Hercules is a winning strategy, and
that this fact cannot be proved in the formal system Pi^1_1-CA_0 + BI. By
considering certain very specific strategies, Buchholz obtains a finite
combinatorial theorem which is also not provable in Pi^1_1-CA_0 + BI.
For students of proof theory, the Buchholz paper [6] is especially
to be recommended because of its novel, elegant treatment of
cut-elimination for the theories ID_n, n <= w. The Buchholz approach is
also the basis of the Buchholz-Wainer paper [7] (this volume). The latter
paper provides a streamlined approach to the proof-theoretic ordinals and
provably recursive functions of first order Peano arithmetic.

* * *

No discussion of "unprovable" finite combinatorial theorems would be
complete without mention of the very recent work of Harvey Friedman [13],
[14]. Friedman has obtained some striking examples of finite combinatorial
theorems which are "true" but not provable by means of any commonly accepted
modes of mathematical reasoning. In order to prove these theorems, it is
necessary to use an axiom to the effect that for all n, there exists an
n-Mahlo cardinal. Such cardinals extend far beyond the confines of, for
instance, Zermelo-Fraenkel set theory.
Recently Ressayre [32] (this volume) has obtained other examples
of finite combinatorial or quasi-combinatorial theorems which are
unprovable in, e.g., Zermelo-Fraenkel set theory. Ressayre's methods are
based on model-theoretic results concerning isomorphic initial segments
of models of set theory.


References.
----------

[1] V. M. Abrusci, Dilators, generalized Goodstein sequences,
independence results: a survey, 20 pages, this volume.

[2] V. M. Abrusci, J.-Y. Girard, and J. Van de Wiele, Some uses of
dilators in combinatorial problems I, 27 pages, this volume.

[3] V. Bergelson, Ergodic Ramsey theory, 26 pages, this volume.

[4] A. Blass, J. L. Hirst, and S. G. Simpson, Logical analysis of
some theorems of combinatorics and topological dynamics, 32 pages, this
volume.

[5] S. Brackin, A summary of "On Ramsey-type theorems and their
provability in weak formal systems", 10 pages, this volume.

[6] W. Buchholz, An independence result for (Pi^1_1-CA) + BI, Annals of
Pure and Applied Logic, to appear.

[7] W. Buchholz and S. Wainer, Provably computable functions and the
fast-growing hierarchy, 20 pages, this volume.

[8] T. J. Carlson and S. G. Simpson, A dual form of Ramsey's Theorem,
Advances in Mathematics 53, 1984, pp. 265-290.

[9] T. J. Carlson and S. G. Simpson, Topological Ramsey Theory, in:
Mathematics of Ramsey Theory (a special volume of Annals of Discrete
Mathematics), edited by J. Nešetřil and V. Rödl, to appear.

[10] E. A. Cichon, A short proof of two recently discovered
independence results using recursion theoretic methods, Proceedings of
the American Mathematical Society 87, 1983, pp. 704-706.

[12] F. van Engelen, A. W. Miller, and J. Steel, Rigid Borel sets and
better quasiorder theory, 27 pages, this volume.

[13] H. Friedman, Necessary uses of abstract set theory in finite
mathematics, Advances in Math., to appear.

[14] H. Friedman, Update on concrete independence, February 1986, 6
pages.

[15] H. Friedman, K. McAloon, and S. G. Simpson, A finite
combinatorial principle which is equivalent to the 1-consistency of
predicative analysis, in: Logic Symposium I (Patras 1980), edited by G.
Metakides, North-Holland, 1982, pp. 197-220.

[16] H. Furstenberg, Recurrence in Ergodic Theory and Combinatorial
Number Theory, Princeton University Press, 1981, vii + 202 pages.

[17] F. Galvin and K. Prikry, Borel sets and Ramsey's Theorem,
Journal of Symbolic Logic 38, 1973, pp. 193-198.

[18] R. L. Goodstein, On the restricted ordinal theorem, Journal of
Symbolic Logic 9, 1944, pp. 33-41.

[19] R. L. Graham, B. L. Rothschild, and J. H. Spencer, Ramsey
Theory, Wiley, 1980, ix + 174 pages.

"
[20] A. Kanamori and K. McAloon, On Godel incompleteness and finite
combinatorics, Annals of Pure and Applied Logic, to appear.

[21] J. Ketonen and R. Solovay, Rapidly growing Ramsey functions,
Annals of Mathematics 113, 1981, pp. 267-314.

[22] L. Kirby and J. Paris, Accessible independence results for
Peano arithmetic, Bulletin of the London Math. Soc. 14, 1982, pp.
285-293.

" "
[23] D. Konig, Uber eine Schlussweise aus dem Endlichen ins
Unendliche, Acta Litterarum ac Scientarum (Ser. Sci. Math.) Szeged 3,
1927, pp. 121-130.

[24] J. Kruskal, Well-quasi-ordering, the tree theorem, and
Vázsonyi's conjecture, Transactions of the Amer. Math. Soc. 95, 1960, pp.
210-225.

"
[25] R. Laver, On Fraisse's order type conjecture, Annals of
Math. 93, l971, pp. 89-111.

[26] M. Loebl and J. Matoušek, On undecidability of the weakened
Kruskal theorem, 6 pages, this volume.

[27] J. Nešetřil and R. Thomas, Well quasiorderings, long games, and
a combinatorial study of undecidability, 13 pages, this volume.

[28] M. Okada and G. Takeuti, On the theory of quasi ordinal
diagrams, 14 pages, this volume.

[29] J. Paris, Some independence results for Peano arithmetic,
Journal of Symbolic Logic 43, 1978, pp. 725-731.

[30] J. Paris and L. Harrington, A mathematical incompleteness in
Peano arithmetic, in: Handbook of Mathematical Logic, edited by J.
Barwise, North-Holland, 1977, pp. 1133-1142.

[31] F. P. Ramsey, On a problem of formal logic, Proc. London Math.
Soc. 30, 1930, pp. 264-286.

[32] J.-P. Ressayre, Nonstandard universes with strong embeddings,
and their finite approximations, 20 pages, this volume.

[33] N. Robertson and P. D. Seymour, Graph minors III: Planar
tree-width, Journal of Combinatorial Theory B 36, 1984, pp. 49-64.

[34] N. Robertson and P. D. Seymour, Graph minors VII: a Kuratowski
theorem for general surfaces, preprint, 38 pages.

"
[35] K. Schutte and S. G. Simpson, Ein in der reinen Zahlentheorie
" "
unbeweisbarer Satz uber endliche folgen von naturlichen Zahlen, Archiv
"
fur math. Logik und Grundlagenforschung 25, 1985, pp. 75-89.

"
[36] S. G. Simpson, BQO theory and Fraisse's conjecture, Chapter 9
of: Recursive Aspects of Descriptive Set Theory, by R. Mansfield and G.
Weitkamp, Oxford University Press, 1985, pp. 124-138.

[37] S. G. Simpson, Nonprovability of certain combinatorial
properties of finite trees, in: Harvey Friedman's Research in the
Foundations of Mathematics, edited by L. Harrington, M. Morley, A.
Scedrov, and S. G. Simpson, North-Holland, 1985, pp. 87-117.

[38] S. G. Simpson, Partial realizations of Hilbert's Program,
preprint, 1986, 21 pages.

[39] R. L. Smith, The consistency strength of some finite forms of
the Higman and Kruskal theorems, in: Harvey Friedman's Research in the
Foundations of Mathematics, edited by L. Harrington, M. Morley, A.
Scedrov, and S. G. Simpson, North-Holland, 1985, pp. 87-117.

[40] G. Takeuti, Ordinal diagrams, J. Math. Soc. Japan 9, 1957, pp.
386-394; 12, 1960, pp. 385-391.

[41] G. Takeuti, Proof Theory (second edition), North-Holland, 1986,
to appear.


Department of Mathematics
Pennsylvania State University
University Park, PA 16802


Note added September 1, 1986. In a remarkable last-minute
contribution to this volume, Friedman, Robertson and Seymour have shown
that the Robertson-Seymour Theorem 4.6 (well quasiorderedness of finite
graphs under minor embeddability) finitistically implies Friedman's
Theorem 4.7 (well quasiorderedness of finite labeled trees under gap
embeddability). It follows that the Robertson-Seymour Theorem, as well as
finite miniaturizations of it, are not provable in the strong system
Pi^1_1-CA_0. Thus the suspicions which were expressed above in connection
with Theorems 4.6 and 4.7 are fully vindicated.

Bill Dubuque

Jun 10, 1996

pa...@mtnmath.com (Paul Budnik) wrote to sci.physics, sci.math on 5/27/96:
:
:What is central to mathematics is that certain propositions

:are deducible from certain formal systems. Without that formal
:mathematics is impossible. With it you have the objective truth
:of at least finite mathematics. ...
:
:There is general agreement that objective mathematical truth exists.
:The vast majority of mathematicians will agree
:on at least the objective truth of finite mathematics.

The notion of "finite mathematics" is problematic since by Goedel's
independence results there is no nontrivial self-contained finite
subsystem. Thus there will be simple statements in any nontrivial
finite system whose proof is independent of the system -- which is
surely problematic for your notion of truth. Note also that such
independent statements are not necessarily contrived statements
without mathematical interest. There are now well-known "natural"
independence results. For details, I highly recommend reading the
expository paper of Stephen Simpson (see below). Also see my prior
posts on Goedel's theorem, Goodstein's theorem, and natural
independence results, which can be located on the newsgroup archive
at http://www.dejanews.com/.

-Bill

Below is the first section of said paper of Simpson.
The full paper can be found at

swiss-ftp.ai.mit.edu:/incoming/unprovable.txt.

This is an asciized version from Simpson's original at

ftp.math.psu.edu:/pub/simpson

Stephen G. Simpson


#0. Introduction.
------------

...

Paul Budnik

Jun 10, 1996

Bill Dubuque (w...@zurich.ai.mit.edu) wrote:
: pa...@mtnmath.com (Paul Budnik) wrote to sci.physics, sci.math on 5/27/96:

: :
: :What is central to mathematics is that certain propositions
: :are deducible from certain formal systems. Without that formal
: :mathematics is impossible. With it you have the objective truth
: :of at least finite mathematics. ...
: :
: :There is general agreement that objective mathematical truth exists.
: :The vast majority of mathematicians will agree
: :on at least the objective truth of finite mathematics.

: The notion of "finite mathematics" is problematic since by Goedel's
: independence results there is no nontrivial self-contained finite
: subsystem. Thus there will be simple statements in any nontrivial
: finite system whose proof is independent of the system -- which is

: surely problematic for your notion of truth. [...]

I disagree. Another way to state the above is that the halting problem is
not recursively solvable. That is, no formal system (finitistic or
otherwise) can correctly solve the halting problem. However, we can
recursively enumerate the Godel numbers of all TMs that do halt. This is
the clearest example one can imagine of obviously objective truth that is
not decidable.

I would agree that there is no clear boundary between finite and infinite
mathematics. I suspect that all meaningful mathematics can be formulated
as finite mathematics and truth is objective for all this mathematics.
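
The claim that the halting set is recursively enumerable (though not
decidable) rests on the classical dovetailing construction. A toy sketch,
with "machines" faked as Python generators; all names here are ours,
purely for illustration:

```python
# Toy dovetailing enumerator for the halting set.  A 'machine' is a
# generator function: the machine halts when its generator is exhausted.

def dovetail(machines, max_stage):
    """At stage s, run machine i for s steps, for every i < s.  Yield the
    index of each machine observed to halt.  Machines that never halt are
    simply never yielded -- the set is enumerated, not decided."""
    halted = set()
    for s in range(1, max_stage + 1):
        for i, make in enumerate(machines[:s]):
            if i in halted:
                continue
            g = make()                   # restart the machine from scratch
            try:
                for _ in range(s):       # run it for s steps
                    next(g)
            except StopIteration:        # generator exhausted: it halted
                halted.add(i)
                yield i

def halts_after(n):
    def g():
        for _ in range(n):
            yield
    return g

def loops():
    while True:
        yield

machines = [halts_after(3), loops, halts_after(1)]
print(list(dovetail(machines, 10)))   # [2, 0]: machine 2 found first
```

Note that the non-halting `loops` machine is never reported, no matter
how large `max_stage` grows; that asymmetry is exactly the point.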
--
pa...@mtnmath.com, http://www.mtnmath.com

Matthew P Wiener

Jun 12, 1996

In article <WGD.96Ju...@berne.ai.mit.edu>, wgd@zurich (Bill Dubuque) writes:
>The notion of "finite mathematics" is problematic since by Goedel's
>independence results there is no nontrivial self-contained finite
>subsystem.

(A): that is not what Goedel proved.

(B): that does not make for "finite mathematics is problematic",
unless you have a rather restricted philosophy in the first place.

> Thus there will be simple statements in any nontrivial
>finite system whose proof is independent of the system -- which is
>surely problematic for your notion of truth.

Numerous counterexamples exist. For example, the first order theory
of the complex numbers is complete.

Bill Dubuque

Jun 16, 1996

:From: pa...@mtnmath.com (Paul Budnik)
:Date: 10 Jun 1996 21:42:44 -0700
:
:I would agree that there is no clear boundary between finite and infinite

:mathematics. I suspect that all meaningful mathematics can be formulated
:as finite mathematics and truth is objective for all this mathematics.

I think you misinterpreted my earlier message. As far as
provability goes, there is indeed a widely accepted notion of
"finite combinatorial proof", namely proofs in the system PA of
first order Peano arithmetic (or equivalently in
Zermelo--Fraenkel set theory with the Axiom of Infinity replaced
by the Axiom of Finity, i.e. "All sets are finite"). It is in
such a system that one would formalize "finite mathematics".

But as Simpson explains in the exposition contained in my prior
post, there are simple independent statements even in these
finite systems. This poses serious difficulties for naive notions
of truth. How do you propose to uniformly assign truth values
to these independent statements? Similar issues were widely
discussed when Cohen proved the independence of the continuum
hypothesis (and even earlier by Skolem who stressed the
"relativity" of set theory). The resulting foundational bifurcations
are analogous to the bifurcation in geometry when Non-Euclidean
geometries were discovered (indeed, some expositions refer to the
ramifications of Cohen's work as "Non-Cantorian set theory").

Modern set-theory decides these independent statements by
searching for good axioms which posit the existence of
higher and higher infinities (large cardinal axioms, etc,
for a good exposition see Kanamori's "The mathematical
development of set theory from Cantor to Cohen", Bulletin
of Symbolic Logic, 2 #1 1996 1-71). Goedel believed that
the human mind -- through its intuition -- is capable of
uniformly constructing such higher infinities and this is
the basis for Goedel's argument against "mind = machine".
See recent papers of Wilfried Sieg for very readable
expositions of this and related points.

-Bill

Paul Budnik

Jun 17, 1996

Bill Dubuque (w...@zurich.ai.mit.edu) wrote:
: :From: pa...@mtnmath.com (Paul Budnik)

: :Date: 10 Jun 1996 21:42:44 -0700
: :
: :I would agree that there is no clear boundary between finite and infinite
: :mathematics. I suspect that all meaningful mathematics can be formulated
: :as finite mathematics and truth is objective for all this mathematics.

: I think you misinterpreted my earlier message. As far as
: provability goes, there is indeed a widely accepted notion of
: "finite combinatorial proof", namely proofs in the system PA of
: first order Peano arithmetic (or equivalently in
: Zermelo--Fraenkel set theory with the Axiom of Infinity replaced
: by the Axiom of Finity, i.e. "All sets are finite"). It is in
: such a system that one would formalize "finite mathematics".

: But as Simpson explains in the exposition contained in my prior
: post, there are simple independent statements even in these
: finite systems. This poses serious difficulties for naive notions
: of truth. How do you propose to uniformly assign truth values
: to these independent statements? Similar issues were widely
: discussed when Cohen proved the independence of the continuum
: hypothesis (and even earlier by Skolem who stressed the
: "relativity" of set theory). The resulting foundational bifurcations
: are analogous to the bifurcation in geometry when Non-Euclidean
: geometries were discovered (indeed, some expositions refer to the
: ramifications of Cohen's work as "Non-Cantorian set theory").

I think you misunderstand my point. It is quite clear that no
axiomatic system can decide truth, even finitistic truth. In particular
it cannot decide the halting problem for all TMs that do not halt.

One cannot assign truth values to the statements by any definable process,
finitistic or otherwise. I just do not see that as an obstacle
to absolute notions of truth. I do think the continuum hypothesis
is a statement that is neither true nor false in any absolute sense
but only true, false or undecidable relative to a particular formal
system.

: Modern set-theory decides these independent statements by


: searching for good axioms which posit the existence of
: higher and higher infinities (large cardinal axioms, etc,
: for a good exposition see Kanamori's "The mathematical
: development of set theory from Cantor to Cohen", Bulletin
: of Symbolic Logic, 2 #1 1996 1-71). Goedel believed that
: the human mind -- through its intuition -- is capable of
: uniformly constructing such higher infinities and this is
: the basis for Goedel's argument against "mind = machine".
: See recent papers of Wilfried Sieg for very readable
: expositions of this and related points.

I am well aware of these issues and about 20 years ago had
discussions with a number of well known logicians including Paul
Cohen of Continuum Hypothesis fame. I tried to convince them that
one can regard all of mathematics as properties of TMs or recursive
processes and that mathematics which cannot be formulated in this way
(such as the continuum hypothesis) is not objectively true or false.

Searching for axioms about large cardinals will turn out to
be a very poor way to extend mathematics. It is not unlike metaphysical
solutions to problems in physics. It is very appealing and gives one
a sense of having special insight into profound truths but is ultimately
a sterile exercise. You have no idea which axioms are true just as
physicists who resort to metaphysical solutions have no idea what
they are talking about. You try to decide which axioms to accept
based on the power or esthetic appeal of the resulting theorems.
I think that is a back-door and inefficient way to begin to
understand the underlying combinatorial structures.

It will be far more productive to gain an understanding of the
combinatorial structures that are implicit in the
axioms. We need to directly develop axioms about these structures.
This has the great advantage of being able to do computer experiments
to aid our intuition. Some day our entire approach to mathematics
will be different. Mathematical research will become absolutely
dependent on computer aids just as virtually every other science
is today. It is particularly ironic that mathematics
which is the ultimate source of the theory of computing is nearly
the last scientific discipline to make effective use of computers.

Matthew P Wiener

Jun 17, 1996

In article <4q309l$o...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:

>Searching for axioms about large cardinals will turn out to be a very
>poor way to extend mathematics.

It has been rather successful so far.

> It is not unlike metaphysical
>solutions to problems in physics. It is very appealing and gives one
>a sense of having special insight into profound truths but is
>ultimately a sterile exercise.

What has been seen so far has been extremely fruitful.

> You have no idea which axioms are true
>just as physicists who resort to metaphysical solutions have no idea
>what they are talking about.

Sure we have an idea. Countably many Woodin cardinals exist. How is that?

> You try to decide which axioms to accept
>based on the power or esthetic appeal of the resulting theorems. I
>think that is a back door and inefficient way to begin to understand
>the underlying combinatorial structures.

And what understanding have you achieved? Zero, apparently. Enjoy
your self-proclaimed front door.

>It will be far more productive to gain an understanding of the
>combinatorial structures that are implicit in the axioms.

That _is_ what is being done.

> We need to
>directly develop axioms about these structures. This has the great
>advantage of being able to do computer experiments to aid our
>intuition.

What utter drivel.

> Some day our entire approach to mathematics will be
>different. Mathematical research will become absolutely dependent on

>computer aids just as virtually every other science is today. [...]

Sounds boring.

Klaus Kassner

Jun 17, 1996

Bill Dubuque wrote:
>> finite systems. This poses serious difficulties for naive notions
> of truth. How do you propose to uniformly assign truth values
> to these independent statements? Similar issues were widely
> discussed when Cohen proved the independence of the continuum
> hypothesis (and even earlier by Skolem who stressed the
> "relativity" of set theory). The resulting foundational bifurcations
> are analogous to the bifurcation in geometry when Non-Euclidean
> geometries were discovered (indeed, some expositions refer to the
> ramifications of Cohen's work as "Non-Cantorian set theory").

Where is this proof published? I knew that it existed but never knew
who proved it and what kind of alternative arithmetics would arise
from it.

Paul Budnik

Jun 17, 1996

Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
: In article <4q309l$o...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:

: >Searching for axioms about large cardinals will turn out to be a very


: >poor way to extend mathematics.

: It has been rather successful so far.

What are the successes? There is no general agreement that ZF is
true, let alone stronger systems.

The foundations of mathematics have been stagnant compared
to many other scientific fields. This is not
just an academic question. How big is the demand for people
who know about measurable cardinals versus those who understand
genetic engineering, MPEG video compression or C++? If you think
either of the latter two is trivial, take a look at the MPEG
specifications or the C++ draft specifications.

What I am afraid has happened is that results in the foundations
of mathematics for the most part have little meaning or relevance
to any but a handful of high priests. Few others understand them
and few others care about them.

Of course there is no formula to determine what is important
scientifically or mathematically. However, if a great deal of effort
by many brilliant people over an extended period of time leads to
little of relevance to those outside the field, it is reasonable to
presume the field has become ingrown and stagnant.


: > It is not unlike metaphysical


: >solutions to problems in physics. It is very appealing and gives one
: >a sense of having special insight into profound truths but is
: >ultimately a sterile exercise.

: What has been seen so far has been extremely fruitful.

: > You have no idea which axioms are true


: >just as physicists who resort to metaphysical solutions have no idea
: >what they are talking about.

: Sure we have an idea. Countably many Woodin cardinals exist. How is that?

Why not explain why that is true to the mathematically literate subset
of the audience that reads this group. See how many you can convince
or how convincing your arguments are.

: > You try to decide which axioms to accept


: >based on the power or esthetic appeal of the resulting theorems. I
: >think that is a back door and inefficient way to begin to understand
: >the underlying combinatorial structures.

: And what understanding have you achieved? Zero, apparently. Enjoy
: your self-proclaimed front door.

Of course it is easy to say `if you think there is a better way go out
and do it'. If it were that simple I would have. I did put significant
effort into such projects at one time. However, it is not so easy for one
individual to compete with a large and talented community even if that
community is following a poor plan.

: >It will be far more productive to gain an understanding of the


: >combinatorial structures that are implicit in the axioms.

: That _is_ what is being done.

To some degree, yes, but indirectly and without making effective use
of computer technology.

: > We need to


: >directly develop axioms about these structures. This has the great
: >advantage of being able to do computer experiments to aid our
: >intuition.

: What utter drivel.

Your usual brilliant debating style is evident.

: > Some day our entire approach to mathematics will be


: >different. Mathematical research will become absolutely dependent on

: >computer aids just as virtually every other science is today. [...]

: Sounds boring.

It will be excruciatingly boring and excruciatingly difficult just as getting
a complex program to work often is. On the other hand it will achieve
enormously important results of great esthetic and great practical interest.

Ilias Kastanas

Jun 18, 1996

In article <4q309l$o...@mtnmath.com>, Paul Budnik <pa...@mtnmath.com> wrote:
>Bill Dubuque (w...@zurich.ai.mit.edu) wrote:
>: :From: pa...@mtnmath.com (Paul Budnik)
>: :Date: 10 Jun 1996 21:42:44 -0700
>: :
>: of truth. How do you propose to uniformly assign truth values
>: to these independent statements? Similar issues were widely


They are statements of arithmetic; either (An)P(n) or (En)~P(n)
(A = "for all", E = "there exists") holds in the integers, and one of
the two is true -- whether a theory can prove it or not.

>
>I think you misunderstand my point. It is quite clear than no
>axiomatic system can decide truth even finitistic truth. In particular
>it cannot decide the halting problem for all TMs that do not halt.
>
>One cannot assign truth values to the statements by any definable process
>finitistic or otherwise. I just do not see that as an obstacle


Not "otherwise"... There is no _algorithm_; one cannot claim more.


>to absolute notions of truth. I do think the continuum hypothesis
>is a statement that is neither true no false in any absolute sense
>but only true, false or undecidable relative to a particular formal
>system.
>
>: Modern set-theory decides these independent statements by
>: searching for good axioms which posit the existence of
>: higher and higher infinities (large cardinal axioms, etc,
>: for a good exposition see Kanamori's "The mathematical
>: development of set theory from Cantor to Cohen", Bulletin
>: of Symbolic Logic, 2 #1 1996 1-71). Goedel believed that
>: the human mind -- through its intuition -- is capable of
>: uniformly constructing such higher infinities and this is
>: the basis for Goedel's argument against "mind = machine".
>: See recent papers of Wilfried Sieg for very readable
>: expositions of this and related points.


Most "natural" undecidable statements do not need new axioms;
set theory, just a fragment, settles them.


>It will be far more productive to gain an understanding of the
>combinatorial structures that are implicit in the
>axioms. We need to directly develop axioms about these structures.
>This has the great advantage of being able to do computer experiments
>to aid our intuition. Some day our entire approach to mathematics
>will be different. Mathematical research will become absolutely
>dependent on computer aids just as virtually every other science
>is today. It is particularly ironic that mathematics
>which is the ultimate source of the theory of computing is nearly
>the last scientific discipline to make effective use of computers.


But... we agree that algorithms are insufficient. What would
we be looking for?

Ilias

Matthew P Wiener

Jun 19, 1996

In article <4q5b2g$3...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:
>Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
>: In article <4q309l$o...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:

>: >Searching for axioms about large cardinals will turn out to be a very


>: >poor way to extend mathematics.

>: It has been rather successful so far.
>
>What are the success?

Determinacy is the most spectacular, but the set theory books are
filled with them.

> There is no general agreement that ZF is
>true let alone stronger systems.

There is close enough to a general agreement.

>The foundations of mathematics has been stagnant compared to many
>other scientific fields. This is not just an academic question.
>How big is the demand for people who know about measurable cardinals
>versus those that understand genetic engineering, MPEG video
>compression or C++.

A pointless question, like most of your meanderings.

>: >You have no idea which axioms are true just as physicists who


>: >resort to metaphysical solutions have no idea what they are
>: >talking about.

>: Sure we have an idea. Countably many Woodin cardinals exist. How is that?

>Why not explain why that is true to the mathematically literate subset
>of the audience that reads this group. See how many you can convince
>or how convincing your arguments are.

Truth isn't an opinion poll.

>: > You try to decide which axioms to accept


>: >based on the power or esthetic appeal of the resulting theorems. I
>: >think that is a back door and inefficient way to begin to understand
>: >the underlying combinatorial structures.

>: And what understanding have you achieved? Zero, apparently. Enjoy
>: your self-proclaimed front door.

>Of course it is easy to say `if you think there is a better way go out
>and do it'. If it were that simple I would have.

You just _say_ it is that simple.

>: >It will be far more productive to gain an understanding of the


>: >combinatorial structures that are implicit in the axioms.

>: That _is_ what is being done.

>To some degree yes but indirectly and without making effective use
>of computer technology.

The only computer technology needed is a good editor and TeX.

>: > We need to


>: >directly develop axioms about these structures. This has the great
>: >advantage of being able to do computer experiments to aid our
>: >intuition.

>: What utter drivel.

>Your usual brilliant debating style is evident.

It gets to the point.

>: > Some day our entire approach to mathematics will be


>: >different. Mathematical research will become absolutely dependent on

>: >computer aids just as virtually every other science is today. [...]

>: Sounds boring.

>It will be excruciatingly boring and excruciatingly difficult just as getting
>a complex program to work often is. On the other hand it will achieve
>enormously important results of great esthetic and great practical interest.

We have your word on this, right? Like, duh.

Paul Budnik

Jun 20, 1996

Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
: In article <4q5b2g$3...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:
: >Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
: >: In article <4q309l$o...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:

: >: >Searching for axioms about large cardinals will turn out to be a very


: >: >poor way to extend mathematics.

: >: It has been rather successful so far.
: >
: >What are the success?

: Determinacy is the most spectacular, but the set theory books are
: filled with them.

Yes, of course they are, but is anyone other than logicians
interested? That is, have there been results that over time have
had widespread practical consequences? I do not think that foundations
research should be measured directly by practical outcomes, but I
think their lack over an extended period of time is reason to doubt
the approach being taken.

: > There is no general agreement that ZF is
: >true let alone stronger systems.

: There is close enough to a general agreement.

Close enough for what? I expect most mathematicians think ZF is
consistent, but *true*? What does it mean to say ZF is true?

: >The foundations of mathematics has been stagnant compared to many
: >other scientific fields. This is not just an academic question.
: >How big is the demand for people who know about measurable cardinals
: >versus those that understand genetic engineering, MPEG video
: >compression or C++.

: A pointless question, like most of your meanderings.

It is relevant to anyone not independently wealthy who is deciding
on a career. In turn that makes it relevant to people in the field
who want it to grow and prosper.

: >: >You have no idea which axioms are true just as physicists who
: >: >resort to metaphysical solutions have no idea what they are
: >: >talking about.

: >: Sure we have an idea. Countably many Woodin cardinals exist. How is that?

: >Why not explain why that is true to the mathematically literate subset
: >of the audience that reads this group. See how many you can convince
: >or how convincing your arguments are.

: Truth isn't an opinion poll.

That is precisely the problem. That a narrow clique of logicians
think something is relevant does not make it so. You will get
widespread agreement on the value (if not the truth) of new axioms
when those axioms are widely used by others for important problems.

[...]
: >: >It will be far more productive to gain an understanding of the
: >: >combinatorial structures that are implicit in the axioms.

: >: That _is_ what is being done.

: >To some degree yes but indirectly and without making effective use
: >of computer technology.

: The only computer technology needed is a good editor and TeX.

Yes the profound insights of mathematicians need no computer aids
to deal with combinatorially complex structures (unlike every other
field of science and engineering). Rather than mess around with the
gory details (which do require automation and computers)
they prefer to meditate about large cardinals because it provides them
a special metaphysical insight into the nature of combinatorics.

Of course in the meantime all but a few find it hard to make a living
in the field. That is not their fault or the fault of their profession. It is
only a reflection of the narrow mindedness of the wider culture.

[...]
: >It will be excruciatingly boring and excruciatingly difficult just as getting
: >a complex program to work often is. On the other hand it will achieve
: >enormously important results of great esthetic and great practical interest.

: We have your word on this, right? Like, duh.

You have the evidence of history. Properly applied computer technology
can provide an enormously powerful aid to human intellect and imagination.
We agree that understanding combinatorics is an important element of logic.
Computers allow one to do enormously complex experiments to
understand combinatorial structures. How could this not
be an enormous aid? Only by maintaining arrogant fantasies about
mystical intuition that comes from contemplating large cardinals!
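[Budnik's call for computer experiments on combinatorial structures can at least be illustrated concretely. The sketch below is this editor's example, not anything proposed in the thread: it computes Goodstein sequences, a combinatorial construction whose eventual termination for every starting value is provable in ZF but, by the Kirby-Paris theorem, not in PA.]

```python
# Goodstein sequences: write n in hereditary base-b notation, replace
# every occurrence of b by b+1, subtract 1, and repeat with the next
# base.  Every such sequence eventually reaches 0, but PA cannot
# prove this.

def rebase(n, b, new_b):
    """Reinterpret n from hereditary base b to hereditary base new_b."""
    if n == 0:
        return 0
    total, power = 0, 0
    while n:
        digit = n % b
        if digit:
            # Exponents are themselves in hereditary base b, so rebase
            # them recursively.
            total += digit * new_b ** rebase(power, b, new_b)
        n //= b
        power += 1
    return total

def goodstein(n, steps):
    """Return up to `steps` terms of the Goodstein sequence from n."""
    seq, b = [n], 2
    while len(seq) < steps and n > 0:
        n = rebase(n, b, b + 1) - 1
        b += 1
        seq.append(n)
    return seq

print(goodstein(3, 6))  # -> [3, 3, 3, 2, 1, 0]: terminates quickly
print(goodstein(4, 4))  # -> [4, 26, 41, 60]: also terminates, but
                        #    only after an astronomical number of steps
```

[Such experiments show the local behavior vividly while leaving the global claim (termination) out of reach of any finite computation, which is roughly where both sides of this argument stand.]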

Matthew P Wiener
Jun 20, 1996

In article <4qbvn8$a...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:
>Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
>: In article <4q5b2g$3...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:

>: >: It has been rather successful so far.

>: >What are the successes?

>: Determinacy is the most spectacular, but the set theory books are
>: filled with them.

>Yes of course they are but is anyone other than logicians interested?

I have no idea. That is their business.

>That is, have there been results that over time have had widespread
>practical impact? I do not think that foundations research should be
>measured directly by practical outcomes, but I think their lack over
>an extended period of time is reason to doubt the approach being
>taken.

There has been no lack. Come up with an actual reason.

>: > There is no general agreement that ZF is
>: >true let alone stronger systems.

>: There is close enough to a general agreement.

>Close enough for what?

For a general agreement.

> I expect most mathematicians think ZF is
>consistent, but *true*? What does it mean to say ZF is true?

That it is true.

>: >The foundations of mathematics has been stagnant compared to many
>: >other scientific fields. This is not just an academic question.
>: >How big is the demand for people who know about measurable cardinals
>: >versus those that understand genetic engineering, MPEG video
>: >compression or C++.

>: A pointless question, like most of your meanderings.

>It is relevant to anyone not independently wealthy who is deciding
>on a career. In turn that makes it relevant to people in the field
>who want it to grow and prosper.

Gibberish defending gibberish? As you wish.

>: >: >You have no idea which axioms are true just as physicists who
>: >: >resort to metaphysical solutions have no idea what they are
>: >: >talking about.

>: >: Sure we have an idea. Countably many Woodin cardinals exist.

>: >Why not explain why that is true to the mathematically literate subset
>: >of the audience that reads this group. See how many you can convince
>: >or how convincing your arguments are.

>: Truth isn't an opinion poll.

>That is precisely the problem. That a narrow clique of logicians
>think something is relevant does not make it so.

That is correct. That it *is* relevant is what makes it so.

> You will get widespread
>agreement on the value (if not the truth) of new axioms when
>those axioms are widely used by others for important problems.

No problem. Meanwhile, it is true. You claimed we have no idea, and
you were wrong. Ranting on about how many people have actually studied
the question is irrelevant, no matter how happy it makes you.

>: >: >It will be far more productive to gain an understanding of the
>: >: >combinatorial structures that are implicit in the axioms.

>: >: That _is_ what is being done.

>: >To some degree yes but indirectly and without making effective use
>: >of computer technology.

>: The only computer technology needed is a good editor and TeX.

>Yes the profound insights of mathematicians need no computer aids
>to deal with combinatorially complex structures (unlike every other
>field of science and engineering). Rather than mess around with the
>gory details (which do require automation and computers)
>they prefer to meditate about large cardinals because it provides them
>a special metaphysical insight into the nature of combinatorics.

You can gibber all you like, but it proves nothing.

>Of course in the meantime all but a few find it hard to make a
>living in the field. That is not their fault or the fault of their
>profession. It is only a reflection of the narrow mindedness of the
>wider culture.

So it goes in the world. Do you have a point, or are you just weak in
the head?

>: >It will be excruciatingly boring and excruciatingly difficult just
>: >as getting a complex program to work often is. On the other hand
>: >it will achieve enormously important results of great esthetic and
>: >great practical interest.

>: We have your word on this, right? Like, duh.

>You have the evidence of history. [...]

No we don't.

Paul Budnik
Jun 21, 1996

Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
: In article <4qbvn8$a...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:
: >: Determinacy is the most spectacular, but the set theory books are
: >: filled with them.

: >Yes of course they are but is anyone other than logicians interested?

: I have no idea. That is their business.

In other words, you couldn't care less whether anyone but logicians
sees any value in this work.

: >That is, have there been results that over time have had widespread
: >practical impact? I do not think that foundations research should be
: >measured directly by practical outcomes, but I think their lack over
: >an extended period of time is reason to doubt the approach being
: >taken.

: There has been no lack. Come up with an actual reason.

That other logicians use these results is not what I consider
practical outcomes.

[...]
: > I expect most mathematicians think ZF is
: >consistent, but *true*? What does it mean to say ZF is true?

: That it is true.

Does this mean that all the objects it postulates exist in some Platonic
heaven or only that you are an arrogant SOB who sees no need to explain
your profound insights to lesser beings?

[...]
: >: Truth isn't an opinion poll.

: >That is precisely the problem. That a narrow clique of logicians
: >think something is relevant does not make it so.

: That is correct. That it *is* relevant is what makes it so.

: > You will get widespread
: >agreement on the value (if not the truth) of new axioms when
: >those axioms are widely used by others for important problems.

: No problem. Meanwhile, it is true. You claimed we have no idea, and
: you were wrong. Ranting on about how many people have actually studied
: the question is irrelevant, no matter how happy it makes you. [...]

Recent (last 30 years or so) results in logic are widely used by other
logicians. I am not aware of much use outside of logic or for that matter
much confidence in results that go beyond ZF. If I am mistaken you or
somebody else ought to be able to provide some examples.

It does not matter how many people work on developing a new technique
for genetic engineering. What is important is the widespread recognition
of the practical value of that technique. If very bright people work
in an area over an extended period of time and come up with nothing
that has recognized value outside the group then they are probably
trapped in a stagnant and ingrown field.

Toby Bartels
Jun 22, 1996

Ilias Kastanas <ika...@alumnae.caltech.edu> wrote in part:

>Bill Dubuque <w...@zurich.ai.mit.edu> wrote:

>>How do you propose to uniformly assign truth values
>>to these independent statements?

>They are statements of arithmetic; either An P(n), or En ~P(n)
>in the integers, and one of the two is true -- whether a theory can
>prove it or not.

What does it mean mathematically to say such a statement is true?


-- Toby Bartels
to...@ugcs.caltech.edu

Toby Bartels
Jun 22, 1996

Matthew P Wiener, whose arguments are worthless, wrote:

>Paul Budnik, whose arguments may have worth, wrote:

(and they alternate)

>>That is, have there been results that over time have had widespread
>>practical impact? I do not think that foundations research should be
>>measured directly by practical outcomes, but I think their lack over
>>an extended period of time is reason to doubt the approach being
>>taken.

>There has been no lack.

An argument worth anything would include an example.

>>>> There is no general agreement that ZF is
>>>>true let alone stronger systems.

>>>There is close enough to a general agreement.

>>Close enough for what?

>For a general agreement.

An argument worth anything would know the difference between `to' and `for'.

>> I expect most mathematicians think ZF is
>>consistent, but *true*? What does it mean to say ZF is true?

>That it is true.

An argument worth anything would answer the question.

>>>>The foundations of mathematics has been stagnant compared to many
>>>>other scientific fields. This is not just an academic question.
>>>>How big is the demand for people who know about measurable cardinals
>>>>versus those that understand genetic engineering, MPEG video
>>>>compression or C++.

>>>A pointless question, like most of your meanderings.

>>It is relevant to anyone not independently wealthy who is deciding
>>on a career. In turn that makes it relevant to people in the field
>>who want it to grow and prosper.

>Gibberish defending gibberish? As you wish.

An argument worth anything would distinguish gibberish from irrelevance.

>>>>It will be excruciatingly boring and excruciatingly difficult just
>>>>as getting a complex program to work often is. On the other hand
>>>>it will achieve enormously important results of great esthetic and
>>>>great practical interest.

>>>We have your word on this, right? Like, duh.

>>You have the evidence of history. [...]

>No we don't.

An argument worth anything would explain how Paul's analysis fails.


-- Toby
to...@ugcs.caltech.edu
just doing my good dead for the day

Toby Bartels
Jun 22, 1996

I <to...@ugcs.caltech.edu> wrote in the end:

>just doing my good dead for the day

or deed


-- Toby
to...@ugcs.caltech.edu

Ilias Kastanas
Jun 22, 1996

In article <4qfpkj$n...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:
>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote in part:
>
>>Bill Dubuque <w...@zurich.ai.mit.edu> wrote:
>
>>>How do you propose to uniformly assign truth values
>>>to these independent statements?
>
>>They are statements of arithmetic; either An P(n), or En ~P(n)
>>in the integers, and one of the two is true -- whether a theory can
>>prove it or not.
>
>What does it mean mathematically to say such a statement is true?


I'll hazard a guess that you know what it means.
P might as well be q != r, q and r polynomials with coefficients
in omega. Either some n makes them =, or none does.

Ilias
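[Ilias's polynomial example can be made concrete. The toy sketch below uses polynomials chosen by this editor purely for illustration; the point is that "En q(n) = r(n)" has a definite truth value in the standard integers whether or not any given theory proves it, and a finite search can only ever settle the existential side.]

```python
# A Sigma^0_1 / Pi^0_1 toy: is there an n in omega with q(n) == r(n)?
# Either some n witnesses equality, or q(n) != r(n) holds at every n;
# exactly one of the two is true in N, independent of provability.
# (Illustrative polynomials only, not from the thread.)

def q(n):
    return n * n + 1

def r(n):
    return 3 * n + 5

def find_witness(bound):
    """Search n = 0 .. bound-1 for a witness to 'En q(n) = r(n)'."""
    for n in range(bound):
        if q(n) == r(n):
            return n  # a witness settles the existential statement
    return None       # inconclusive: no witness below the bound

print(find_witness(100))  # -> 4, since q(4) = r(4) = 17
```

[A `None` result never settles the universal statement, of course; it only pushes the bound higher, which is exactly the asymmetry the thread keeps circling.]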

Toby Bartels
Jun 22, 1996

Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote in part:

>>>They are statements of arithmetic; either An P(n), or En ~P(n)
>>>in the integers, and one of the two is true -- whether a theory can
>>>prove it or not.

>>What does it mean mathematically to say such a statement is true?

>I'll hazard a guess that you know what it means.

I seriously don't!
My best guess is that it means the statement holds in the standard model of N.
If my guess is right, my next question is whether this is common terminology.

I have only seen truth used in mathematics in three kinds of situations.
One is where a certain formal system is implied by the context,
in which case a true proposition is a theorem in that system
(and falsehood may not be a strict opposite of truth).
Another is similar, where a certain model is implied by the context.
The last is in reference to an ideal Platonic truth,
and this is what much of the thread seems to have been about.
The other concepts of truth aren't absolute.

If you meant truth relative to the standard model of N
(which could be implied by context recently),
then I know what you mean. Otherwise I don't.

>P might as well be q != r, q and r polynomials with coefficients
>in omega. Either some n makes them =, or none does.


-- Toby
to...@ugcs.caltech.edu

Ilias Kastanas
Jun 22, 1996

In article <4qhq31$h...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:
>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
>
>>Toby Bartels <to...@ugcs.caltech.edu> wrote:
>
>>>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote in part:
>
>>>>They are statements of arithmetic; either An P(n), or En ~P(n)
>>>>in the integers, and one of the two is true -- whether a theory can
>>>>prove it or not.
>
>>>What does it mean mathematically to say such a statement is true?
>
>>I'll hazard a guess that you know what it means.
>
>I seriously don't!
>My best guess is that it means the statement holds in the standard model of N.


Right. That is what I meant by "in the integers".

>If my guess is right, my next question is whether this is common terminology.


Pretty much -- in this setting, due to the nature of N.

>I have only seen truth used in mathematics in three kinds of situations.
>One is where a certain formal system is implied by the context,
>in which case a true proposition is a theorem in that system
>(and falsehood may not be a strict opposite of truth).


This helps confuse truth and provability, but practically is hard
to avoid. Anyway, the theorems are "true in all models" of the formal
system.

>Another is similar, where a certain model is implied by the context.


And this is what I interjected in a discussion about theories.

>The last is in reference to an ideal Platonic truth,
>and this is what much of the thread seems to have been about.


Well, it takes two Platonic seconds to realize that any Platonic
truth has to be by way of a model. It surely won't come as a formal
theory!

>The other concepts of truth aren't absolute.


Touching another thread -- what is relative about truth in N?
Will omega, + or * change?

>If you meant truth relative to the standard model of N
>(which could be implied by context recently),
>then I know what you mean. Otherwise I don't.


Chalk another one up for N!

Ilias

Matthew P Wiener
Jun 23, 1996

In article <4qeie1$d...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:
>Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
>: In article <4qbvn8$a...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:

>: >: Determinacy is the most spectacular, but the set theory books are
>: >: filled with them.

>: >Yes of course they are but is anyone other than logicians interested?

>: I have no idea. That is their business.

>In other words, you couldn't care less whether anyone but logicians
>sees any value in this work.

No, not in other words. What people like is up to them.

>: >That is, have there been results that over time have had widespread
>: >practical impact? I do not think that foundations research should be
>: >measured directly by practical outcomes, but I think their lack over
>: >an extended period of time is reason to doubt the approach being
>: >taken.

>: There has been no lack. Come up with an actual reason.

>That other logicians use these results is not what I consider
>practical outcomes.

Nor do I. As I said, there has been no lack.

>: > I expect most mathematicians think ZF is
>: >consistent, but *true*? What does it mean to say ZF is true?

>: That it is true.

>Does this mean that all the objects it postulates exist in some Platonic
>heaven or only that you are an arrogant SOB who sees no need to explain
>your profound insights to lesser beings?

How would I know?

>: >: Truth isn't an opinion poll.

>: >That is precisely the problem. That a narrow clique of logicians
>: >think something is relevant does not make it so.

>: That is correct. That it *is* relevant is what makes it so.

>: > You will get widespread
>: >agreement on the value (if not the truth) of new axioms when
>: >those axioms are widely used by others for important problems.

>: No problem. Meanwhile, it is true. You claimed we have no idea, and
>: you were wrong. Ranting on about how many people have actually studied
>: the question is irrelevant, no matter how happy it makes you. [...]

>Recent (last 30 years or so) results in logic are widely used by other
>logicians. I am not aware of much use outside of logic or for that matter
>much confidence in results that go beyond ZF. If I am mistaken you or
>somebody else ought to be able to provide some examples.

I mentioned determinacy.

>It does not matter how many people work on developing a new technique
>for genetic engineering. What is important is the widespread recognition
>of the practical value of that technique. If very bright people work
>in an area over an extended period of time and come up with nothing
>that has recognized value outside the group then they are probably
>trapped in a stagnant and ingrown field.

This analogy is perhaps supposed to mean something, but I have no idea
of what.

Matthew P Wiener
Jun 23, 1996

In article <4qgdne$r...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels),
who is a retard and a liar, wrote:

>Matthew P Wiener, whose arguments are worthless, wrote:

>>Paul Budnik, whose arguments may have worth, wrote:

>>>That is, have there been results that over time have had widespread
>>>practical impact? I do not think that foundations research should be
>>>measured directly by practical outcomes, but I think their lack over
>>>an extended period of time is reason to doubt the approach being
>>>taken.

>>There has been no lack.

>An argument worth anything would include an example.

Why? Budnik is boring, just repeating his same old lies over and over
again.

>>>>> There is no general agreement that ZF is
>>>>>true let alone stronger systems.

>>>>There is close enough to a general agreement.

>>>Close enough for what?

>>For a general agreement.

>An argument worth anything would know the difference between `to' and `for'.

Right.

>>> I expect most mathematicians think ZF is
>>>consistent, but *true*? What does it mean to say ZF is true?

>>That it is true.

>An argument worth anything would answer the question.

I did.

>>>>>The foundations of mathematics has been stagnant compared to many
>>>>>other scientific fields. This is not just an academic question.
>>>>>How big is the demand for people who know about measurable cardinals
>>>>>versus those that understand genetic engineering, MPEG video
>>>>>compression or C++.

>>>>A pointless question, like most of your meanderings.

>>>It is relevant to anyone not independently wealthy who is deciding
>>>on a career. In turn that makes it relevant to people in the field
>>>who want it to grow and prosper.

>>Gibberish defending gibberish? As you wish.

>An argument worth anything would distinguish gibberish from irrelevance.

In Budnik's case, I do not care.

>>>>>It will be excruciatingly boring and excruciatingly difficult just
>>>>>as getting a complex program to work often is. On the other hand
>>>>>it will achieve enormously important results of great esthetic and
>>>>>great practical interest.

>>>>We have your word on this, right? Like, duh.

>>>You have the evidence of history. [...]

>>No we don't.

>An argument worth anything would explain how Paul's analysis fails.

Why bother? It's just his promise of some world to come, presented
as if it were foregone fact.

Mike Kent
Jun 23, 1996

Probably getting myself into trouble but ...

As I recall from a long while back, a statement is true relative
to some theory if it cannot be falsified in any model of the
theory. This is NOT the same as being deducible from the axioms
of the theory.

The Goedel statement is true (if arithmetic is consistent),
though it is not deducible.

On the other hand, the parallel postulate is neither true nor
false relative to some "standard" axiomatization of geometry
which omits it (providing that Euclidean geometry is consistent),
since it is possible to construct a model of hyperbolic geometry
within Euclidean geometry (the notions of "line" and "plane"
are redefined in such a way that the redefined objects satisfy
all the axioms other than the parallel postulate, and that there
are infinitely many distinct "coplanar" "lines" through a point
not on a "line" and which do not intersect the "line"). Thus,
IF there is a model of Euclidean geometry, there is ALSO a model
of non-Euclidean geometry, and so there are models of "almost-
Euclidean" geometry which satisfy the parallel postulate and
models which satisfy its negation.

--
mk...@acm.org

Benjamin J. Tilly
Jun 23, 1996

In article <4qk42v$lep$2...@mhadf.production.compuserve.com>
Mike Kent <70530...@CompuServe.COM> writes:

> Probably getting myself into trouble but ...
>
> As I recall from a long while back, a statement is true relative
> to some theory if it cannot be falsified in any model of the
> theory. This is NOT the same as being deducible from the axioms
> of the theory.
>

I recalled the opposite. The standard ultrafilter construction shows
that if a statement is unable to be falsified in some theory, then the
theory must have a model in which the statement is true.
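[For reference, the recollection above is the model-existence direction of Goedel's completeness theorem; the formulation below is this editor's statement of the standard result, not Tilly's wording.]

```latex
% Model-existence form of the completeness theorem: if a theory T
% cannot refute \varphi, then T together with \varphi has a model.
T \nvdash \neg\varphi
  \quad\Longrightarrow\quad
  T \cup \{\varphi\} \ \text{has a model}.
```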

> The Goedel statement is true (if arithmetic is consistent),
> though it is not deducible.
>

Assuming that arithmetic is consistent, there is a model for arithmetic
in which the Goedel statement is false. (The length of the proof would,
of course, be longer than 1, 2, 3, 4, ... and would not be a "number"
in the model that we "want" to understand.)

> On the other hand, the parallel postulate is neither true nor
> false relative to some "standard" axiomatization of geometry
> which omits it (providing that Euclidean geometry is consistent),
> since it is possible to construct a model of hyperbolic geometry
> within Euclidean geometry (the notions of "line" and "plane"
> are redefined in such a way that the redefined objects satisfy
> all the axioms other than the parallel postulate, and that there
> are infinitely many distinct "coplanar" "lines" through a point
> not on a "line" and which do not intersect the "line"). Thus,
> IF there is a model of Euclidean geometry, there is ALSO a model
> of non-Euclidean geometry, and so there are models of "almost-
> Euclidean" geometry which satisfy the parallel postulate and
> models which satisfy its negation.

If there is a model in which Goedel's statement is true, then there is
also one in which it is false.

We are, of course, more interested in the model(s) in which it is true.
:-)

Ben Tilly

Ilias Kastanas
Jun 24, 1996

In article <4qk42v$lep$2...@mhadf.production.compuserve.com>,
Mike Kent <70530...@CompuServe.COM> wrote:

>Probably getting myself into trouble but ...
>
>As I recall from a long while back, a statement is true relative
>to some theory if it cannot be falsified in any model of the
>theory. This is NOT the same as being deducible from the axioms
>of the theory.

But it is! In a model, any statement either holds or fails. If
P cannot be falsified in any model of T, then it holds in all models
of T. By Goedel Completeness, it is therefore deducible from T.

>The Goedel statement is true (if arithmetic is consistent),
>though it is not deducible.

Goedel's G is actually _equivalent_ to the consistency of PA
(provably in a weak theory).

The standard model N is a model of PA + G. But there are nonstandard
models that are models of PA + ~G. The "proof of 0=1 from PA" is only
such in the sense of the model; it is encoded by a nonstandard integer,
and hence does not decode to a "real" proof.
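[Ilias's parenthetical can be spelled out; the standard result, in this editor's formulation, is that the Goedel sentence G of PA is provably equivalent, over PA itself, to the arithmetized consistency statement.]

```latex
% G = the Goedel sentence of PA; Con(PA) = the arithmetized statement
% that no PA-proof of 0=1 exists.  Provably in PA (indeed in a weak
% fragment such as PRA):
\mathrm{PA} \vdash G \leftrightarrow \mathrm{Con}(\mathrm{PA})
```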


Trouble? No. I won't turn you in to the Turing Police!


Ilias

Toby Bartels
Jun 24, 1996

Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote once upon a time:

>>>They are statements of arithmetic; either An P(n), or En ~P(n)
>>>in the integers, and one of the two is true -- whether a theory can
>>>prove it or not.

>>My best guess is that it means the statement holds in the standard model of N.

>Right. That is what I meant by "in the integers".

>>Much of the thread seems to have been about an ideal Platonic truth.

>Well, it takes two Platonic seconds to realize that any Platonic
>truth has to be by way of a model. It surely won't come as a formal
>theory!

Platonists, at least to a formalist like myself,
seem to think there is a particular model which is true.
This makes their concept different from the other, because it's absolute.
You don't have to specify a model; it's given a priori.

>>The other concepts of truth aren't absolute.

>Touching another thread -- what is relative about truth in N?
>Will omega, + or * change?

(What other thread?)

The concept of truth in a model is relative,
because you have to specify the model.
Once you specify N, of course, then there is no problem.
But that's what I meant by `relative'.


Normally, I'd have no problem interpreting your statement
as being about the standard model of N.
It's just that much of this thread has been dealing with
the idea of an absolute truth,
one where you don't specify anything further.
I suspect the person who asked the question you answered
with the statement about truth (in the standard model of N)
was asking about absolute truth;
your answer to his question would be
`I specify a model; otherwise I have no truth.',
with which I think he would be happy.


-- Toby
to...@ugcs.caltech.edu

Matthew P Wiener
Jun 24, 1996

In article <4qmjmj$l...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels) writes:
>Platonists, at least to a formalist like myself, seem to think there
>is a particular model which is true.

Eh? Set theorists, for example, are generally platonists and are all
acutely aware of the multiplicity of models.

Michael Weiss
Jun 24, 1996

Now that this thread has settled down a bit, and no longer consists of
"sez you"'s stacked five levels deep, I will venture to toss in my
own plugged nickels into the debate.

The last time this topic erupted (in sci.logic, as I recall), Matt
Wiener and Mike Oliver outlined some of the recent results flowing
from the large cardinal axioms. (Relatively recent, that is--- last
couple of decades.) Penelope Maddy wrote an excellent survey article
a couple of years back in the Journal of Symbolic Logic.

This is what Wiener is talking about when he keeps saying
"determinacy".

I urge anyone really interested in this thread to get ahold of a
high-level treatment of the recent large cardinal stuff. It may
change your mind. It did change my outlook--- I was a thorough-going
skeptic about large cardinals before I read Maddy's article. I wasn't
a total novice before, either--- I'd been through Drake's book on
large cardinals that took the story up to about 1970.

Now, I wouldn't say I've become a rabid tub-thumping Platonist either.
But for most mathematicians, philosophy is not a straitjacket, and
axioms don't have to pass ontological litmus tests to be regarded as
worthy of investigation.

If I may compare this work to the n-category stuff John Baez has been
talking about lately:

(a) In both cases, there's a "background story" to furnish
motivation. In other words, one doesn't simply crank out axioms
like lead from a mechanical pencil. For n-categories, the common
theme is the promotion of equalities to isomorphisms; for the
large cardinal axioms, the theme is a continuation of the
    "cumulative hierarchy" story often used to motivate the ZF
axioms.

To quote Mike Oliver: "Not only should the universe go on for ever
`vertically'; it should also be as `wide' as it possibly can, that
is, contain all possible subsets of any given set."

(b) In both cases, the theory has led to some non-trivial results;
the whole structure has the coherence typical of "good" sets of
axioms.

(c) In neither case are we talking about an empire like topology or
group theory, where the Sun never sets on the applications.
That's a rather high bar to set though for *any* mathematical
theory, let alone one of recent vintage.

Of course, JB has reasons to believe that n-category theory may lead
to the next breakthrough in quantum gravity. I make no such claim for
the large cardinal results. But then, quantum gravity may not help me
debug my programs...

Toby Bartels
Jun 24, 1996

Matthew P Wiener, who will someday have reasons for the following, wrote:

>Toby Bartels, who is a retard and a liar, wrote:

>>Matthew P Wiener, whose arguments are worthless, wrote:

>>>There has been no lack.

>>An argument worth anything would include an example.

>Why? Budnik is boring, just repeating his same old lies over and over again.

You find him interesting enough that you respond to him.
At any rate, there's no way Budnik could know better
if someone like you doesn't tell him why he's wrong.
So, I'm sorry, but he's not lying.
He's simply ignorant (assuming he is wrong).
If you care enough, you can give examples,
so that he ought to know better by then,
at which point he might become a liar.
But you haven't done that;
you care enough only to make an unwarranted insult.

If, however, you've had this conversation before,
and explained it to him then,
I retract the above paragraph.
In that case, I'd be interested in examples myself,
just for my own amusement of course.

>>>Paul Budnik, whose arguments may have worth, wrote:

>>(and they alternate)

>>>>>There is close enough to a general agreement.

>>>>Close enough for what?

>>>For a general agreement.

>>An argument worth anything would know the difference between `to' and `for'.

>Right.

OK, I'll be a nice guy and explain this to you.
I'll use simple words, so you can understand
even though you're pretending to be an idiot.

We have something which is not a general agreement.
However, it is close to a general agreement.
Now, when something is tall enough, or small enough,
or close enough to a general agreement,
there is always something it's enough *for*.
For example, I am tall enough *for* an amusement park ride.
Now, the question is:
What is there close enough to a general agreement *for*???
See how simple this is?

>>>>What does it mean to say ZF is true?

>>>That it is true.

>>An argument worth anything would answer the question.

>I did.

Gosh, what could I possibly say to this? except:
Matt, you're a liar.

>>An argument worth anything would distinguish gibberish from irrelevance.

>In Budnik's case, I do not care.

Fair enough.
Why you bothered to respond at all I don't know,
since you looked like a complete fool when you did so,
but I guess it released your tension or something.
Of course, if you don't care enough to read what he wrote,
I don't know where you'd get that tension from.
Maybe you just come on Usenet to release anger you get
from elsewhere in your life,
and Paul is just an innocent victim?
Who knows.

>>>>You have the evidence of history. [...]

>>>No we don't.

>>An argument worth anything would explain how Paul's analysis fails.

>Why bother? It's just his promise of some world to come, presented
>as if it were foregone fact.

For Christ's metaphorical sake, Matt,
he gave reasons for his promise.
You know, the part you cut?
Guess you lied again, at least as you would say.
You're getting good at this.


hugs and kisses,
Toby

Matthew P Wiener

Jun 25, 1996

In article <4qn8as$8...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels),
who is still a retard and a liar, writes:
>Matthew P Wiener, who will someday have reasons for the following, wrote:

>>Why? Budnik is boring, just repeating his same old lies over and over again.

>You find him interesting enough that you respond to him. At any
>rate, there's no way Budnik could know better if someone like you
>doesn't tell him why he's wrong.

I have in the past, and I did give an example.

> So, I'm sorry, but he's not lying.

Sure he is.

>He's simply ignorant (assuming he is wrong). If you care enough, you
>can give examples, so that he ought to know better by then, at which
>point he might become a liar. But you haven't done that;

But I have.

>If, however, you've had this conversation before, and explained it to
>him then, I retract the above paragraph.

Ah, you catch on to the essence.

> In that case, I'd be
>interested in examples myself, just for my own amusement of course.

Determinacy is the most famous example. Solovay's model is another.
The normal Moore space conjecture is another.

>>>>>>There is close enough to a general agreement.

>>>>>Close enough for what?

>>>>For a general agreement.

>>>An argument worth anything would know the difference between `to' and `for'.

>>Right.

>We have something which is not a general agreement. However, it is
>close to a general agreement. Now, when something is tall enough, or
>small enough, or close enough to a general agreement, there is always
>something it's enough *for*. For example, I am tall enough *for* an
>amusement park ride. Now, the question is: What is there close
>enough to a general agreement *for*??? See how simple this is?

Yes.

>>>>>What does it mean to say ZF is true?

>>>>That it is true.

>>>An argument worth anything would answer the question.

>>I did.

>Gosh, what could I possibly say to this? except:
>Matt, you're a liar.

That answer is right there.

>>>An argument worth anything would distinguish gibberish from irrelevance.

>>In Budnik's case, I do not care.

>Why you bothered to respond at all I don't know,

Sometimes I don't know. Habit, mostly.

>>>>>You have the evidence of history. [...]

>>>>No we don't.

>>>An argument worth anything would explain how Paul's analysis fails.

>>Why bother? It's just his promise of some world to come, presented
>>as if it were foregone fact.

>For Christ's metaphorical sake, Matt, he gave reasons for his
>promise. You know, the part you cut? Guess you lied again, at least
>as you would say. You're getting good at this.

| You have the evidence of history. Properly applied computer
| technology can provide an enormously powerful aid to human intellect
| and imagination. We agree that understanding combinatorics is an
| important element of logic. Computers allow one to do enormously
| complex experiments to understand combinatorial structures. How
| could this not be an enormous aid? Only by maintaining arrogant
| fantasies about mystical intuition that comes from contemplating
| large cardinals!

These are "reasons"? These are propaganda. Computers are good at
some things, not at others. The evidence of history is that when
it comes to understanding certain Diophantine equations (of the
Matiyasevich sort related to large cardinal consistency), computers
have been of zero help except, more recently, in their uses for word
processing and electronic communications, while the part that Budnik
raves against has made all the difference.

Matthew P Wiener

Jun 25, 1996

In article <COLUMBUS.96...@pleides.osf.org>, columbus@pleides (Michael Weiss) writes:
>Now, I wouldn't say I've become a rabid tub-thumping Platonist either.
>But for most mathematicians, philosophy is not a straight-jacket, and
>axioms don't have to pass ontological litmus tests to be regarded as
>worthy of investigation.

The actual philosophy doesn't matter.

It is a matter of historical record, though, that Platonism has been
a highly motivating philosophy behind much of set theory this past
century. (As opposed to simply being a majoritarian background
philosophy, as it seems to be in mathematics as a whole.)

>If I may compare this work to the n-category stuff John Baez has been
>talking about lately:

Which reminds me. Another case of large cardinals impinging on real
mathematics is "Vopenka's Principle", which roughly speaking asserts
that in any variety (an algebraically defined category), one can find
one object which is a "logical" subobject of another. See, for example,
Adamek and Rosicky LOCALLY PRESENTABLE AND ACCESSIBLE CATEGORIES.

Keith Ramsay

Jun 25, 1996

In article <4qn8as$8...@gap.cco.caltech.edu>, to...@ugcs.caltech.edu (Toby
Bartels) wrote:

Addressing some remarks to Matthew Wiener:

[...]


|At any rate, there's no way Budnik could know better
|if someone like you doesn't tell him why he's wrong.

|So, I'm sorry, but he's not lying.

I don't believe he is lying. He does, however, have the
option of improving his understanding by means other than
posting claims and waiting to be proved wrong. There is
quite a long tradition of alternative approaches....

Keith Ramsay

Paul Budnik

Jun 25, 1996

Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:
: In article <4qn8as$8...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels),
: > In that case, I'd be

: >interested in examples myself, just for my own amusement of course.

: Determinacy is the most famous example. Solovay's model is another.
: The normal Moore space conjecture is another.

Is there any general agreement on the truth of these? Can you give
some indication of how important in a practical sense these are?

[...]

: | You have the evidence of history. Properly applied computer
: | technology can provide an enormously powerful aid to human intellect
: | and imagination. We agree that understanding combinatorics is an
: | important element of logic. Computers allow one to do enormously
: | complex experiments to understand combinatorial structures. How
: | could this not be an enormous aid? Only by maintaining arrogant
: | fantasies about mystical intuition that comes from contemplating
: | large cardinals!

: These are "reasons"? These are propaganda. Computers are good at
: some things, not at others.

A truly profound observation.

: The evidence of history is that when
: it comes to understanding certain Diophantine equations (of the
: Matiyasevich sort related to large cardinal consistency), computers
: have been of zero help except, more recently, in their uses for word
: processing and electronic communications, while the part that Budnik
: raves against has made all the difference.

What do you think I am raving against? Up to a certain point ZF has
been enormously successful. I do not rave against it.

What history does teach us is that using computers to leverage
human intelligence leads to new and surprising results
that could never have been anticipated. By suggesting you know what can
be discovered in this way you only underscore your monumental arrogance
and ignorance.

I do not think we can decide the truth of extensions to ZF with
any confidence by mathematical meditation either alone or as part
of a community. We need to understand the combinatorial content
of ZF to go beyond it. That is impossible without effective use
of computers in ways that are common in most scientific
fields but sadly lacking in foundations research.

Toby Bartels

Jun 26, 1996

Matthew P Wiener <wee...@sagi.wistar.upenn.edu> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>If, however, you've had this conversation before, and explained it to
>>him then, I retract the above paragraph.

>Ah, you catch on to the essence.

I always come around eventually,
except when you're wrong :-).

>Computers are good at
>some things, not at others. The evidence of history is that when
>it comes to understanding certain Diophantine equations (of the
>Matiyasevich sort related to large cardinal consistency), computers
>have been of zero help except, more recently, in their uses for word
>processing and electronic communications, while the part that Budnik
>raves against has made all the difference.

Just what I was looking for!
(An argument with the strength of Paul's or greater.)
Thanks!

If this keeps up, we'll start liking each other.


-- Toby
to...@ugcs.caltech.edu

Toby Bartels

Jun 26, 1996

Matthew P Wiener <wee...@sagi.wistar.upenn.edu> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>Platonists, at least to a formalist like myself,
>>seem to think there is a particular model which is true.

>Eh? Set theorists, for example, are generally platonists and are all
>acutely aware of the multiplicity of models.

And yet some say things such as that large cardinal axioms are true.
Now, if this means they are true in a certain model,
then I don't see how the Platonists are distinguished from formalists.
OTOH, if this means something absolute,
even though it may fail in some models,
then being aware of the multiplicity of models
doesn't seem to keep Platonists from thinking some are true and others false.


-- Toby
to...@ugcs.caltech.edu

Toby Bartels

Jun 26, 1996

Michael Weiss <colu...@osf.org> wrote:

>I urge anyone really interested in this thread to get ahold of a
>high-level treatment of the recent large cardinal stuff. It may
>change your mind. It did change my outlook--- I was a thorough-going
>skeptic about large cardinals before I read Maddy's article. I wasn't
>a total novice before, either--- I'd been through Drake's book on
>large cardinals that took the story up to about 1970.

>(a) In both cases, there's a "background story" to furnish
> motivation. In other words, one doesn't simply crank out axioms
> like lead from a mechanical pencil. For n-categories, the common
> theme is the promotion of equalities to isomorphisms; for the
> large cardinal axioms, the theme is a continuation of the
> "cumulative hierarchy" story often used to motivate the ZF
> axioms.

> To quote Mike Oliver: "Not only should the universe go on for ever
> `vertically'; it should also be as `wide' as it possibly can, that
> is, contain all possible subsets of any given set."

>(b) In both cases, the theory has led to some non-trivial results;
> the whole structure has the coherence typical of "good" sets of
> axioms.

>(c) In neither case are we talking about an empire like topology or
> group theory, where the Sun never sets on the applications.
> That's a rather high bar to set though for *any* mathematical
> theory, let alone one of recent vintage.

These all sound like good reasons to me
to be interested in formal systems
that include large cardinal axioms.
Formalists should have no problem with this,
so where does the urge towards Platonism come in?

Somewhat facetiously:
Does `true' to a Platonist mean `interesting'?


-- Toby
to...@ugcs.caltech.edu

Matthew P Wiener

Jun 26, 1996

In article <4qqhvs$v...@mtnmath.com>, paul@mtnmath (Paul Budnik) writes:
>Matthew P Wiener (wee...@sagi.wistar.upenn.edu) wrote:

>: In article <4qn8as$8...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels),

>: > In that case, I'd be
>: >interested in examples myself, just for my own amusement of course.

>: Determinacy is the most famous example. Solovay's model is another.
>: The normal Moore space conjecture is another.

>Is there any general agreement on the truth of these?

I have no idea. That certainly doesn't matter regarding the truth
itself.

> Can you give
>some indication of how important in a practical sense these are?

These results were highly praised in the related outside fields.

>: | You have the evidence of history. Properly applied computer
>: | technology can provide an enormously powerful aid to human intellect
>: | and imagination. We agree that understanding combinatorics is an
>: | important element of logic. Computers allow one to do enormously
>: | complex experiments to understand combinatorial structures. How
>: | could this not be an enormous aid? Only by maintaining arrogant
>: | fantasies about mystical intuition that comes from contemplating
>: | large cardinals!

>: These are "reasons"? These are propaganda. Computers are good at
>: some things, not at others.

>A truly profound observation.

For you, yes. For the rest of us, kind of ditzy.

>: The evidence of history is that when it comes to understanding
>: certain Diophantine equations (of the Matiyasevich sort related to
>: large cardinal consistency), computers have been of zero help
>: except, more recently, in their uses for word processing and
>: electronic communications, while the part that Budnik raves against
>: has made all the difference.

>What do you think I am raving against? Up to a certain point ZF has
>been enormously successful. I do not rave against it.

So stick to the evidence of history, and quit inventing an imaginary
future as proof that you are right.

>What history does teach us is that using computers to leverage human
>intelligence leads to new and surprising results that could never
>have been anticipated.

Sometimes. Sometimes not. *That* is what history teaches us.

> By suggesting you know what can be discovered
>in this way you only underscore your monumental arrogance and ignorance.

I make no such suggestion. *YOU* do, and you suggest it as a foregone
conclusion.

This is why you get flamed.

>I do not think we can decide the truth of extensions to ZF with
>any confidence by mathematical meditation either alone or as part
>of a community.

The community that studies these questions head on disagrees.

> We need to understand the combinatorial content
>of ZF to go beyond it.

That is what the community is working on.

> That is impossible without effective use
>of computers in ways that are common is most scientific
>fields but sadly lacking in foundations research.

Do you have any evidence? No. Are you gibbering and raving? Yes.

Matthew P Wiener

Jun 26, 1996

In article <4qqp8u$k...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels) writes:
>Matthew P Wiener <wee...@sagi.wistar.upenn.edu> wrote:
>>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>>Platonists, at least to a formalist like myself,
>>>seem to think there is a particular model which is true.

>>Eh? Set theorists, for example, are generally platonists and are all
>>acutely aware of the multiplicity of models.

>And yet some say things such as that large cardinal axioms are true.

That is correct.

>Now, if this means they are true in a certain model, then I don't see
>how the Platonists are distinguished from formalists.

There would still be some differences, but much smaller ones.

> OTOH, if this
>means something absolute, even though it may fail in some models,
>then being aware of the multiplicity of models doesn't seem to keep
>Platonists from thinking some are true and others false.

Correct. But it is not a commitment to "a particular model which is
true". It's a recognizable landmark along the way.

Paul Budnik

Jun 27, 1996
Keith Ramsay (Rams...@hermes.bc.edu) wrote:

: In article <4qn8as$8...@gap.cco.caltech.edu>, to...@ugcs.caltech.edu (Toby
: Bartels) wrote:

I have exercised that option. Of course we can always all
learn more but I do not know what claims I have made in this
thread that you think are mistaken.

Matthew P Wiener

Jun 28, 1996
In article <4qqph7$k...@gap.cco.caltech.edu>, toby@ugcs (Toby Bartels) writes:
>These all sound like good reasons to me to be interested in formal
>systems that include large cardinal axioms. Formalists should have
>no problem with this, so where does the urge towards Platonism come
>in?

Why bother with these formal systems? Why pick one instead of any
other? (This is a question prior to set theory itself. It becomes
acute in set theory and large cardinals since there is no overarching
source you can point to for your favorite formal system. "Assume a
can opener", as the joke goes.)

For example, let us translate various consistency assertions into
exponential Diophantine equations, after Robinson, Putnam, and Davis.

Thus, in addition to FLT, "Fermat's last theorem", we have ALT, BLT,
CLT, DLT, and ELT, where ALT corresponds to `w-huge cardinals are
consistent with ZFC', BLT to supercompact cardinals, CLT to countably
many Woodin cardinals, DLT to measurable cardinals, ELT to ZFC itself
being consistent.

These are rather complicated equations, but they could be written down
explicitly. (For some reason, there is a fetish to reduce everything
down to purely Diophantine equations, with no exponent variables. The
resulting equations are much more complicated.)

It is known that ALT=>BLT=>CLT=>DLT=>ELT=>FLT. I would like to see a
formalist motivate bothering with DLT, for example. In principle, one
might always find a counterexample.
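
The asymmetry in that last remark can be made concrete with a toy search.
The polynomials below are invented purely for illustration (the real
equations encoding, say, DLT would be astronomically larger): a finite
search can refute a claim of the form "p(n) != q(n) for all n", but no
finite search can ever confirm it.

```python
# Toy illustration only: these polynomials are made up for the example
# and have nothing to do with the actual equations encoding large
# cardinal consistency, which are enormously more complicated.

def has_counterexample(p, q, bound):
    """Search n = 0, 1, ..., bound-1 for a solution of p(n) == q(n).

    Finding one refutes the claim "p(n) != q(n) for all n";
    finding none proves nothing about larger n.
    """
    for n in range(bound):
        if p(n) == q(n):
            return n  # counterexample found: the claim is refuted
    return None  # search inconclusive -- the claim is NOT thereby proved

# Made-up example: is n^2 + 1 ever equal to 2n + 9?
p = lambda n: n ** 2 + 1
q = lambda n: 2 * n + 9
print(has_counterexample(p, q, 100))  # finds n = 4, since 17 == 17
```

No bound on the search will ever settle such a statement positively;
that one-sidedness is the formalist's predicament being pointed at.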

>Somewhat facetiously:
>Does `true' to a Platonist mean `interesting'?

Not at all.

Paul Budnik

Jun 28, 1996
Benjamin J. Tilly (Benjamin...@dartmouth.edu) wrote:

: Given his several years of history on sci.physics, I am inclined to
: believe that Budnik would indeed lie, even about a point that has been
: repeatedly demonstrated to him. For this reason I long ago decided to
: ignore most of what he says...

Can you give a single example of something that has been clearly
demonstrated to me that I continued to question? You should not make
such outrageous claims unless you have something to back them up.

Steve Whittet

Jun 29, 1996
In article <4qhq31$h...@gap.cco.caltech.edu>, to...@ugcs.caltech.edu says...

>
>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote:
>
>>Toby Bartels <to...@ugcs.caltech.edu> wrote:
>
>>>Ilias Kastanas <ika...@alumnae.caltech.edu> wrote in part:
>
>>>>...statements of arithmetic; either An P(n), or En ~P(n)

>>>>in the integers, and one of the two is true -- whether a theory can
>>>>prove it or not.
>
>>>What does it mean mathematically to say such a statement is true?
>
>...snip...

>truth used in mathematics ...a true proposition is a theorem in that system
>(and falsehood may not be a strict opposite of truth).

>Another is similar, where a certain model is implied by the context.

>The last is in reference to an ideal Platonic truth,

It is interesting to see an interdisciplinary approach to this question.

I have heard it said that "truth is the relation between a proposition
and reality".

Logicians say you can either "speak the truth and say nothing, or tolerate
some ambiguity and argue". (Presumably mathematicians should choose
the former, seeking form, and physicists the latter course, seeking function.)

I wonder what the implications are for such Heideggerian statements
as "The Nothing nothings"?

Are time and space two examples of such a condition?

In Platonic terms, is there a difference between Being
correct and Becoming correct?

>and this is what much of the thread seems to have been about.

>The other concepts of truth aren't absolute.

Is it possible to have such a thing as a relative absolute?

How do we mathematically define the "Being" as differentiated
from the "Becoming" infinite? Consider the following instance.

An infinite and eternal continuum of space and time which in one
sense is merely a set of coordinates and as much an abstraction
as a number system, and in another sense is spoken of as a
universal set which contains all matter and energy and
exhibits all of their properties.

Is it meaningful to speak of it as having or lacking
homogeneous conditions throughout?

Is it isotropic?

Can it have a beginning or an end and still be
infinite and eternal?

Is our assessment of the truth of any such proposition
simply relative to our perception,
dependent upon our position in the system?

>
>If you meant truth relative to the standard model of N
>(which could be implied by context recently),
>then I know what you mean. Otherwise I don't.

Can you express that in terms of a general mathematical statement?


>
>>P might as well be q != r, q and r polynomials with coefficients
>>in omega. Either some n makes them =, or none does.
>
>

How would you evaluate whether or not the statement was true or false?

>-- Toby
> to...@ugcs.caltech.edu

steve

