Also available at http://math.ucr.edu/home/baez/week236.html

July 26, 2006
This Week's Finds in Mathematical Physics (Week 236)
John Baez

This week I'd like to catch you up on some papers about
categorification and quantum mechanics.

But first, since it's summer vacation, I'd like to take you on
a little road trip - to infinity. And then, for fun, a little
detective story about the history of the icosahedron.

Cantor invented two kinds of infinities: cardinals and ordinals.
Cardinals are more familiar. They say how big sets are. Two sets
can be put into 1-1 correspondence iff they have the same number of
elements - where this kind of "number" is a cardinal.

But today I want to talk about ordinals. Ordinals say how big
"well-ordered" sets are. A set is well-ordered if it's linearly
ordered and every nonempty subset has a smallest element.

For example, the empty set

{}

is well-ordered in a trivial sort of way, and the corresponding
ordinal is called

0.

Similarly, any set with just one element, like this:

{0}

is well-ordered in a trivial sort of way, and the corresponding
ordinal is called

1.

Similarly, any set with two elements, like this:

{0,1}

becomes well-ordered as soon as we decree which element is bigger;
the obvious choice is to say 0 < 1. The corresponding ordinal is
called

2.

Similarly, any set with three elements, like this:

{0,1,2}

becomes well-ordered as soon as we linearly order it; the obvious
choice here is to say 0 < 1 < 2. The corresponding ordinal is called

3.

Perhaps you're getting the pattern - you've probably seen these
particular ordinals before, maybe sometime in grade school.
They're called finite ordinals, or "natural numbers".

But there's a cute trick they probably didn't teach you then:
we can *define* each ordinal to *be* the set of all ordinals
less than it:

0 = {} (since no ordinal is less than 0)
1 = {0} (since only 0 is less than 1)
2 = {0,1} (since 0 and 1 are less than 2)
3 = {0,1,2} (since 0, 1 and 2 are less than 3)

and so on. It's nice because now each ordinal *is* a
well-ordered set of the size that ordinal stands for.
And, we can define one ordinal to be "less than or equal" to
another precisely when it's a subset of the other.
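If you like to see this trick in code, here's a quick Python sketch (my own illustration, using frozensets - nothing deep):

```python
# Each finite von Neumann ordinal is the frozenset of all smaller
# ordinals, so "less than" is membership and "less than or equal"
# is the subset relation.

def ordinal(n):
    """Build the finite ordinal n = {0, 1, ..., n-1} as a frozenset."""
    s = frozenset()
    for _ in range(n):
        s = s | {s}   # successor step: the next ordinal is s union {s}
    return s

print(ordinal(2) <= ordinal(3))   # subset order: 2 <= 3, prints True
print(ordinal(2) in ordinal(3))   # membership: 2 < 3, prints True
```
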

Now, what comes after all the finite ordinals? Well,
the set of all finite ordinals is itself well-ordered:

{0,1,2,3,...}

So, there's an ordinal corresponding to this - and it's the first
*infinite* ordinal. It's usually called omega. Using the cute
trick I mentioned, we can actually define

omega = {0,1,2,3,...}

Now, what comes after this? Well, it turns out there's a
well-ordered set

{0,1,2,3,...,omega}

containing the finite ordinals together with omega, with the
obvious notion of "less than": omega is bigger than the rest.
Corresponding to this set there's an ordinal called

omega+1

As usual, we can simply define

omega+1 = {0,1,2,3,...,omega}

(At this point you could be confused if you know about cardinals,
so let me throw in a word of reassurance. The sets omega and
omega+1 have the same "cardinality", but they're different as
ordinals, since you can't find a 1-1 and onto function between
them that *preserves the ordering*. This is easy to see, since
omega+1 has a biggest element while omega does not.)

Now, what comes next? Well, not surprisingly, it's

omega+2 = {0,1,2,3,...,omega,omega+1}

Then comes

omega+3, omega+4, omega+5,...

and so on. You get the idea.

What next?

Well, the ordinal after all these is called omega+omega.
People often call it "omega times 2" or "omega 2" for short. So,

omega 2 = {0,1,2,3,...,omega,omega+1,omega+2,omega+3,...}

What next? Well, then comes

omega 2 + 1, omega 2 + 2,...

and so on. But you probably have the hang of this already, so
we can skip right ahead to omega 3.

In fact, you're probably ready to skip right ahead to omega 4,
and omega 5, and so on.

In fact, I bet now you're ready to skip all the way to
"omega times omega", or "omega squared" for short:

omega^2 =

{0,1,2,...,omega,omega+1,omega+2,...,omega 2,omega 2+1,omega 2+2,...}

It would be fun to have a book with omega pages, each page half
as thick as the previous page. You can tell a nice long story
with an omega-sized book. But it would be even more fun to have
an encyclopedia with omega volumes, each being an omega-sized book,
each half as thick as the previous volume. Then you have omega^2
pages - and it can still fit in one bookshelf!

What comes next? Well, we have

omega^2+1, omega^2+2, ...

and so on, and after all these come

omega^2+omega, omega^2+omega+1, omega^2+omega+2, ...

and so on - and eventually

omega^2 + omega^2 = omega^2 2

and then a bunch more, and then

omega^2 3

and then a bunch more, and then

omega^2 4

and then a bunch more, and more, and eventually

omega^2 omega = omega^3.

You can probably imagine a bookcase containing omega encyclopedias,
each with omega volumes, each with omega pages, for a total of
omega^3 pages.

I'm skipping more and more steps to keep you from getting bored.
I know you have plenty to do and can't spend an *infinite* amount
of time reading This Week's Finds, even if the subject is infinity.

So, if you don't mind me just mentioning some of the high points,
there are guys like omega^4 and omega^5 and so on, and after all
these comes

omega^omega.

And then what?

Well, then comes omega^omega + 1, and so on, but I'm sure
that's boring by now. And then come ordinals like

omega^omega 2,..., omega^omega 3, ..., omega^omega 4, ...

leading up to

omega^omega omega = omega^{omega + 1}

Then eventually come ordinals like

omega^omega omega^2 , ..., omega^omega omega^3, ...

and so on, leading up to:

omega^omega omega^omega = omega^{omega + omega} = omega^{omega 2}

This actually reminds me of something that happened driving across
South Dakota one summer with a friend of mine. We were in college,
so we had the summer off, and we drove across the country -
crossing South Dakota all the way from the eastern border to the
west on Interstate 90.

This state is huge - about 600 kilometers across, and most of it is
really flat, so the drive was really boring. We kept seeing signs
for a bunch of tourist attractions on the western edge of the state,
like the Badlands and Mt. Rushmore - a mountain that they carved
to look like faces of presidents, just to give people some reason to keep
driving.

Anyway, I'll tell you the rest of the story later - I see some more
ordinals coming up:

omega^{omega 3},... omega^{omega 4},... omega^{omega 5},...

We're really whizzing along now just to keep from getting bored - just
like my friend and I did in South Dakota. You might fondly imagine
that we had fun trading stories and jokes, like they do in road movies.
But we were driving all the way from Princeton to my friend Chip's
cabin in California. By the time we got to South Dakota, we were all
out of stories and jokes.

Hey, look! It's

omega^{omega omega} = omega^{omega^2}

That was cool. Then comes

omega^{omega^3}, ... omega^{omega^4}, ... omega^{omega^5}, ...

and so on.

Anyway, back to my story. For the first half of our trip across
the state, we kept seeing signs for something called the South
Dakota Tractor Museum.

Oh, wait, here's an interesting ordinal - let's slow down and
take a look:

omega^{omega^omega}

I like that! Okay, let's keep driving:

omega^{omega^omega} + 1, omega^{omega^omega} + 2, ...

and then

omega^{omega^omega} + omega, ..., omega^{omega^omega} + omega 2, ...

and then

omega^{omega^omega} + omega^2, ..., omega^{omega^omega} + omega^3, ...

and eventually

omega^{omega^omega} + omega^omega

and eventually

omega^{omega^omega} + omega^{omega^omega} = omega^{omega^omega} 2

and then

omega^{omega^omega} 3, ..., omega^{omega ^ omega} 4, ...

and eventually

omega^{omega^omega} omega = omega^{omega^omega + 1}

and then

omega^{omega^omega + 2}, ..., omega^{omega^omega + 3}, ...

This is pretty boring; we're already going infinitely fast,
but we're still just picking up speed, and it'll take a while
before we reach something interesting.

Anyway, we started getting really curious about this South Dakota
Tractor Museum - it sounded sort of funny. It took 250 kilometers
of driving before we passed it. We wouldn't normally care about
a tractor museum, but there was really nothing else to think about
while we were driving. The only thing to see were fields of grain,
and these signs, which kept building up the suspense, saying things
like "ONLY 100 MILES TO THE SOUTH DAKOTA TRACTOR MUSEUM!"

We're zipping along really fast now:

omega^{omega^{omega^omega}}, ... omega^{omega^{omega^{omega^omega}}},...

What comes after all these?

At this point we need to stop for gas - our notation for ordinals
runs out!

The ordinals don't stop; it's just our notation that gives out.
The set of all ordinals listed up to now - including all the ones
we zipped past - is a well-ordered set called

epsilon_0

or "epsilon-nought". This has the amazing property that

epsilon_0 = omega^{epsilon_0}

And, it's the smallest ordinal with this property.

In fact, all the ordinals smaller than epsilon_0 can be drawn as
trees. You write them in "Cantor normal form" like this:

omega^{omega^omega + omega} + omega^omega + omega + omega + 1 + 1 + 1

using just + and exponentials and 1 and omega, and then you turn
this notation into a picture of a tree. I'll leave it as a puzzle
to figure out how.

So, the set of (finite, rooted) trees becomes a well-ordered set
whose ordinal is epsilon_0. Trees are important in combinatorics
and computer science, so epsilon_0 is not really so weird after all.
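You can even get a computer to compare such trees. Here's one possible encoding in Python (mine, not canonical - and a partial spoiler for the puzzle above): an ordinal below epsilon_0 becomes the tuple of its Cantor normal form exponents, listed from biggest to smallest, each exponent encoded the same way. Recursive lexicographic comparison, with a proper prefix counting as smaller, is then exactly comparison of ordinals.

```python
# Ordinals below epsilon_0 as finite trees: () is 0, and a tuple
# (a1, a2, ..., ak) with a1 >= a2 >= ... >= ak stands for
# omega^a1 + omega^a2 + ... + omega^ak.

def cmp_ord(a, b):
    """Compare two ordinal trees: negative, zero or positive result."""
    for x, y in zip(a, b):
        c = cmp_ord(x, y)
        if c:
            return c
    return len(a) - len(b)   # a proper prefix is the smaller ordinal

ZERO  = ()
ONE   = (ZERO,)    # omega^0
OMEGA = (ONE,)     # omega^1
```

For example, cmp_ord((ZERO, ZERO), OMEGA) is negative, saying 2 < omega, while cmp_ord((OMEGA,), OMEGA) is positive, saying omega^omega > omega.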

Another cool thing is that Gentzen proved the consistency of the
usual axioms for arithmetic - "Peano arithmetic" - with the help
of epsilon_0. He did this by drawing proofs as trees, and using
this to give an inductive argument that there's no proof in Peano
arithmetic that 0 = 1. But, this inductive argument goes beyond
the simple kind you use to prove facts about all natural numbers.
It uses induction up to epsilon_0.

You can't formalize Gentzen's argument in Peano arithmetic: thanks
to Goedel, this system can't prove itself consistent unless it's *not*.
I used to think this made Gentzen's proof pointless, especially since
"induction up to epsilon_0" sounded like some sort of insane logician's
extrapolation of ordinary mathematical induction.

But now I see that induction up to epsilon_0 can be thought of as
induction on trees, and it seems like an obviously correct principle.
Of course Peano's axioms also seem obviously correct, so I don't know
that Gentzen's proof makes me *more sure* Peano arithmetic is
consistent. But, it's interesting.

Induction up to epsilon_0 also lets you prove other stuff you
can't prove with just Peano arithmetic. For example, it lets you
prove that every Goodstein sequence eventually reaches zero!

Huh?

To write down a Goodstein sequence, you start with any natural
number and write it in "recursive base 2", like this:

2^{2^2 + 1} + 2^1

Then you replace all the 2's by 3's:

3^{3^3 + 1} + 3^1

Then you subtract 1 and write the answer in "recursive base 3":

3^{3^3 + 1} + 1 + 1

Then you replace all the 3's by 4's, subtract 1 and write the
answer in recursive base 4. Then you replace all the 4's by
5's, subtract 1 and write the answer in recursive base 5. And so on.

At first these numbers seem to keep getting bigger! So, it seems
shocking at first that they eventually reach zero. For example,
if you start with the number 4, you get this Goodstein sequence:

4, 26, 41, 60, 83, 109, 139, 173, 211, 253, 299, 348, ...

and apparently it takes about 3 x 10^{60605351} steps to reach zero!
You can try examples yourself on this applet:

1) National Curve Bank, Goodstein's theorem,
http://curvebank.calstatela.edu/goodstein/goodstein.htm
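Or, if you'd rather compute them yourself, here's a little Python sketch (my own, nothing official about it) that carries out the recipe: write the number in hereditary base b, bump every b to b+1, subtract 1, repeat:

```python
def bump_base(n, b):
    """Write n in hereditary ("recursive") base b, then turn every b into b+1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        n, digit = divmod(n, b)
        if digit:
            # exponents get rewritten in base b too, hence the recursion
            result += digit * (b + 1) ** bump_base(power, b)
        power += 1
    return result

def goodstein(m, steps):
    """The first `steps` terms of the Goodstein sequence starting at m."""
    seq, base = [m], 2
    while len(seq) < steps and m > 0:
        m = bump_base(m, base) - 1
        base += 1
        seq.append(m)
    return seq

print(goodstein(4, 6))   # [4, 26, 41, 60, 83, 109]
```
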

But if you think about it the right way, it's obvious that every Goodstein
sequence *does* reach zero.

The point is that these numbers in "recursive base n" look a lot
like ordinals in Cantor normal form. If we translate them into
ordinals by replacing n by omega, the ordinals keep getting smaller
at each step, even when the numbers get bigger!

For example, when we do the translation

2^{2^2 + 1} + 2^1 |-> omega^{omega^omega + 1} + omega^1

3^{3^3 + 1} + 1 + 1 |-> omega^{omega^omega + 1} + 1 + 1

we see the ordinal got smaller even though the number got bigger.
Since epsilon_0 is well-ordered, the ordinals must bottom out at zero
after a finite number of steps - that's what "induction up to epsilon_0"
tells us. So, the numbers must too!

In fact, Kirby and Paris showed that you *need* induction up to
epsilon_0 to prove Goodstein sequences always converge to zero.
Since you can't do induction up to epsilon_0 in Peano arithmetic,
thanks to Goedel and Gentzen, it follows that Peano arithmetic is
unable to prove the Goodstein sequences go to zero (unless Peano
arithmetic is inconsistent).

So, this is a nice example of a fact about arithmetic that's obvious
if you think about it for a while, but not provable in Peano arithmetic.

I don't know any results in mathematical physics that use induction
up to epsilon_0, but there could be some - after all, trees show up
in the theory of Feynman diagrams. That would be pretty interesting.

There's a lot more to say about this, but I hear what you're asking:
what comes after epsilon_0?

Well, duh! It's

epsilon_0 + 1

Then comes

epsilon_0 + 2

and then eventually we get to

epsilon_0 + omega

and then

epsilon_0 + omega^2,..., epsilon_0 + omega^3,... , epsilon_0 + omega^4,...

and after a long time

epsilon_0 + epsilon_0 = epsilon_0 2

and then eventually

epsilon_0^2

and then eventually...

Oh, I see! You want to know the first *really interesting* ordinal
after epsilon_0.

Well, this is a matter of taste, but you might be interested in
epsilon_1. This is the first ordinal after epsilon_0 that satisfies
this equation:

x = omega^x

How do we actually reach this ordinal? Well, just as epsilon_0
was the limit of this sequence:

omega, omega^omega, omega^{omega^omega}, omega^{omega^{omega^omega}},...

epsilon_1 is the limit of this:

epsilon_0 + 1, omega^{epsilon_0 + 1}, omega^{omega^{epsilon_0 + 1}},...

In other words, it's the *union* of all these well-ordered sets.

In what sense is epsilon_1 the "first really interesting ordinal" after
epsilon_0? I'm not sure! Maybe it's the first one that can't be
built out of 1, omega and epsilon_0 using finitely many additions,
multiplications and exponentiations. Does anyone out there know?

Anyway, the next really interesting ordinal I know after epsilon_1 is
epsilon_2. It's the next solution of

x = omega^x

and it's defined to be the limit of this sequence:

epsilon_1 + 1, omega^{epsilon_1 + 1}, omega^{omega^{epsilon_1 + 1}},...

Maybe now you get the pattern. In general, epsilon_alpha is the
alpha-th solution of

x = omega^x

and we can define this, if we're smart, for any ordinal alpha.

So, we can keep driving on through fields of ever larger ordinals:

epsilon_2,..., epsilon_3,..., epsilon_4, ...

and eventually

epsilon_omega,..., epsilon_{omega+1},..., epsilon_{omega+2},...

and eventually

epsilon_{omega^2},..., epsilon_{omega^3},..., epsilon_{omega^4},...

and eventually

epsilon_{omega^omega},..., epsilon_{omega^{omega^omega}},...

As you can see, this gets boring after a while - it's suspiciously
similar to the beginning of our trip through the ordinals, with
them now showing up as subscripts under this "epsilon" notation.
But this is misleading: we're moving much faster now. I'm skipping
over much bigger gaps, not bothering to mention all sorts of ordinals
like

epsilon_{omega^omega} + epsilon_{omega 248} + omega^{omega^{omega + 17}}

Anyway... so finally we *got* to this South Dakota Tractor Museum,
driving pretty darn fast at this point, about 85 miles an hour...
and guess what?

Oh - wait a minute - it's sort of interesting here:

epsilon_{epsilon_0},..., epsilon_{epsilon_1},..., epsilon_{epsilon_2}, ...

and now we reach

epsilon_{epsilon_omega}

and then

epsilon_{epsilon_{omega^omega}},...,

epsilon_{epsilon_{omega^{omega^omega}}},...

and then as we keep speeding up, we see:

epsilon_{epsilon_{epsilon_0}},...

epsilon_{epsilon_{epsilon_{epsilon_0}}},...

epsilon_{epsilon_{epsilon_{epsilon_{epsilon_0}}}},...

So, by the time we got to that tractor museum, we were driving really fast.
And, all we saw as we whizzed by was a bunch of rusty tractors out in
a field! It was over in a split second! It was a real anticlimax -
just like this little anecdote, in fact.

But that's the way it is when you're driving through these ordinals.
Every ordinal, no matter how large, looks pretty pathetic and small
compared to the ones ahead - so you keep speeding up, looking for a
really big one... and when you find one, you see it's part of a new
pattern, and that gets boring too...

Anyway, when we reach the limit of this sequence

epsilon_0,

epsilon_{epsilon_0},

epsilon_{epsilon_{epsilon_0}},

epsilon_{epsilon_{epsilon_{epsilon_0}}},

epsilon_{epsilon_{epsilon_{epsilon_{epsilon_0}}}},...

our notation breaks down, since this is the first solution of

x = epsilon_x

We could make up a new name for this ordinal, like eta_0.

Then we could play the whole game again, defining eta_{alpha} to be
the alpha-th solution of

x = epsilon_x

sort of like how we defined the epsilons. This kind of equation, where
something equals some function of itself, is called a "fixed point"
equation.

But since we'll have to play this game infinitely often, we might
as well be more systematic about it!

As you can see, we keep running into new, qualitatively different types
of ordinals. First we ran into the powers of omega, then we ran into
the epsilons, and now these etas. It's gonna keep happening! For
each type of ordinal, our notation runs out when we reach the first
"fixed point" - when the xth ordinal of this type is actually equal to
x.

So, instead of making up infinitely many Greek letters, let's use
phi_gamma for the gamma-th type of ordinal, and phi_gamma(alpha) for
the alpha-th ordinal of type gamma.

We can use the fixed point equation to define phi_{gamma+1} in terms
of phi_gamma. In other words, we start off by defining

phi_0(alpha) = omega^alpha

and then define

phi_{gamma+1}(alpha)

to be the alpha-th solution of

x = phi_{gamma}(x)

We can even define this stuff when gamma itself is infinite.
For a more precise definition see the Wikipedia article cited below...
but I hope you get the rough idea.
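For the record, here's the definition written out the usual way (my paraphrase of the one in the Wikipedia article, so check there for the fine print):

```latex
\begin{aligned}
\varphi_0(\alpha) &= \omega^{\alpha} \\
\varphi_{\gamma+1}(\alpha) &= \text{the } \alpha\text{-th fixed point of } \varphi_{\gamma} \\
\varphi_{\gamma}(\alpha) &= \text{the } \alpha\text{-th common fixed point of the }
  \varphi_{\delta} \text{ for all } \delta < \gamma,
  \quad \text{when } \gamma \text{ is a limit ordinal}
\end{aligned}
```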

This defines a lot of really big ordinals, called the "Veblen hierarchy".

There's a souped-up version of Cantor normal form that can handle
every ordinal that's a finite sum of guys in the Veblen hierarchy:
you can write them *uniquely* as finite sums of the form

phi_{gamma_1}(alpha_1) + ... + phi_{gamma_k}(alpha_k)

where each term is less than or equal to the previous one, and each
alpha_i is not a fixed point of phi_{gamma_i}.

But as you might have suspected, not *all* ordinals can be written
in this way. For one thing, every ordinal we've reached so far is
*countable*: as a set you can put it in one-to-one correspondence
with the integers. There are much bigger *uncountable* ordinals -
at least if you believe you can well-order uncountable sets.

But even in the realm of the countable, we're nowhere near done!

As I hope you see, the power of the human mind to see a pattern
and formalize it gives the quest for large countable ordinals a
strange quality. As soon as we see a systematic way to generate
a sequence of larger and larger ordinals, we know this sequence
has a limit that's larger than all of those! And this opens the
door to even larger ones....

So, this whole journey feels a bit like trying to run away from
your own shadow: the faster you run, the faster it chases after you.
But, it's interesting to hear what happens next. At this point we
reach something a bit like the Badlands on the western edge of South
Dakota - something a bit spooky!

It's called the Feferman-Schuette ordinal, Gamma_0. This is just
the limit, or union if you prefer, of all the ordinals mentioned
so far: all the ones you can get from the Veblen hierarchy. You
can also define Gamma_0 by a fixed point property: it's the smallest
ordinal x with

phi_x(0) = x

Now, we've already seen that induction up to different ordinals
gives us different amounts of mathematical power: induction up
to omega is just ordinary mathematical induction as formalized by
Peano arithmetic, but induction up to epsilon_0 buys us more -
it lets us prove the consistency of Peano arithmetic!

Logicians including Feferman and Schuette have carried out a detailed
analysis of this subject. They know a lot about how much induction
up to different ordinals buys you. And apparently, induction up to
Gamma_0 lets us prove the consistency of a system called "predicative
analysis". I don't understand this, nor do I understand the claim
I've seen that Gamma_0 is the first ordinal that cannot be defined
predicatively - i.e., can't be defined without reference to itself.
Sure, saying Gamma_0 is the first solution of

phi_x(0) = x

is non-predicative. But what about saying that Gamma_0 is the union
of all ordinals in the Veblen hierarchy? What's non-predicative
about that?

If anyone could explain this in simple terms, I'd be much obliged.

As you can see, I'm getting out of my depth here. That's pretty typical
in This Week's Finds, but this time - just to shock the world -
I'll take it as a cue to shut up. So, I won't try to explain the
outrageously large Bachmann-Howard ordinal, or the even more
outrageously large Church-Turing ordinal - the first one that can't
be written down using *any* computable system of notation. You'll
just have to read the references.

I urge you to start by reading the Wikipedia article on ordinal
numbers, then the article on ordinal arithmetic, and then the one
on large countable ordinals - they're really well-written:

2) Wikipedia, Ordinal numbers,
http://en.wikipedia.org/wiki/Ordinal_number

Ordinal arithmetic,
http://en.wikipedia.org/wiki/Ordinal_arithmetic

Large countable ordinals,
http://en.wikipedia.org/wiki/Large_countable_ordinals

The last one has a tempting bibliography, but warns us that most
books on this subject are hard to read and out of print. Apparently
nobody can agree on notation for ordinals beyond the Veblen hierarchy,
either.

Gentzen proved the consistency of Peano arithmetic in 1936:

3) Gerhard Gentzen, Die Widerspruchfreiheit der reinen Zahlentheorie,
Mathematische Annalen 112 (1936), 493-565. Translated as "The
consistency of arithmetic" in M. E. Szabo ed., The Collected Works
of Gerhard Gentzen, North-Holland, Amsterdam, 1969.

Goodstein's theorem came shortly afterwards:

4) R. Goodstein, On the restricted ordinal theorem, Journal of
Symbolic Logic, 9 (1944), 33-41.

but Kirby and Paris proved it independent of Peano arithmetic
only in 1982:

5) L. Kirby and J. Paris, Accessible independence results for Peano
arithmetic, Bull. London. Math. Soc. 14 (1982), 285-93.

That marvelous guy Alan Turing wrote his PhD thesis at Princeton
under the logician Alonzo Church. It was about ordinals and their
relation to logic:

6) Alan M. Turing, Systems of logic defined by ordinals, Proc.
London Math. Soc., Series 2, 45 (1939), 161-228.

This is regarded as his most difficult paper. The idea is to
take a system of logic like Peano arithmetic and throw in an
extra axiom saying that system is consistent, and then another
axiom saying *that* system is consistent, and so on ad infinitum -
getting a new system for each ordinal. These systems are recursively
axiomatizable up to (but not including) the Church-Turing ordinal.

These ideas were later developed much further....

But, reading original articles is not so easy, especially if you're
in Shanghai without access to a library. So, what about online stuff -
especially stuff for the amateur, like me?

Well, this article is great fun if you're looking for a readable
overview of the grand early days of proof theory, when Hilbert was
battling Brouwer, and then Goedel came and blew everyone away:

7) Jeremy Avigad and Erich H. Reck, "Clarifying the nature of the
infinite": the development of metamathematics and proof theory,
Carnegie-Mellon Technical Report CMU-PHIL-120, 2001. Also
available as http://www.andrew.cmu.edu/user/avigad/Papers/infinite.pdf

But, it doesn't say much about the newer stuff, like the idea that
induction up to a given ordinal can prove the consistency of a logical
system - the bigger the ordinal, the stronger the system. For work
up to 1960, this is a good overview:

8) Solomon Feferman, Highlights in proof theory, in Proof Theory,
eds. V. F. Hendricks et al, Kluwer, Dordrecht (2000), pp. 11-31.
Also available at http://math.stanford.edu/~feferman/papers.html

For newer stuff, try this:

9) Solomon Feferman, Proof theory since 1960, prepared for the
Encyclopedia of Philosophy Supplement, Macmillan Publishing Co.,
New York. Also available at
http://math.stanford.edu/~feferman/papers.html

Also try the stuff on proof theory, trees and categories mentioned
in "week227", and the book by Girard, Lafont and Taylor mentioned
in "week94".

Finally, sometime I want to get ahold of this book by someone who
always enlivened logic discussions on the internet until his death in
April this year:

10) Torkel Franzen, Inexhaustibility: A Non-Exhaustive Treatment,
Lecture Notes in Logic 16, A. K. Peters, Ltd., 2004.

The blurb sounds nice: "The inexhaustibility of mathematical
knowledge is treated based on the concept of transfinite
progressions of theories as conceived by Turing and Feferman."

Okay, now for a bit about the icosahedron - my favorite Platonic solid.

I've been thinking about the "geometric McKay correspondence" lately,
and among other things this sets up a nice relationship between the
symmetry group of the icosahedron and an amazing entity called E8.
E8 is the largest of the exceptional Lie groups - it's 248-dimensional.
It's related to the octonions (the number "8" is no coincidence) and
it shows up in string theory. It's very beautiful how this complicated
sounding stuff can be seen in distilled form in the icosahedron.

I have a lot to say about this, but you're probably worn out by our
road trip through the land of big ordinals. So for now, try "week164"
and "week230" if you're curious. Let's talk about something less
stressful - the early history of the icosahedron.

I spoke about the early history of the dodecahedron in "week63".
It's conjectured that the Greeks got interested in this shape
from looking at crystals of iron pyrite. These aren't regular
dodecahedra, since normal crystals can't have 5-fold symmetry -
though "quasicrystals" can. Instead, they're "pyritohedra".
The Greeks' love of mathematical perfection led them to the
regular dodecahedron....

... and it also led them to invent the icosahedron:

11) Benno Artmann, About the cover: the mathematical conquest of
the third dimension, Bulletin of the AMS, 43 (2006), 231-235.
Also available at
http://www.ams.org/bull/2006-43-02/S0273-0979-06-01111-6/

According to Artmann, an ancient note written in the margins of a copy
of Euclid's Elements says the regular icosahedron and octahedron
were discovered by Theaetetus!

If you're a cultured sort, you may know Theaetetus through Plato's
dialog of the same name, where he's described as a mathematical
genius. He's also mentioned in Plato's "The Sophist". He probably
discovered the icosahedron between 380 and 370 BC, and died at an
early age in 369. Euclid later wrote up the construction of the
icosahedron that we find in his Elements:

12) Euclid, Elements, Book XIII, Proposition 16, online version
due to David Joyce at
http://aleph0.clarku.edu/~djoyce/java/elements/bookXIII/propXIII16.html

Artmann says this was the first time a geometrical entity appeared
in pure thought before it was seen! An interesting thought.

Book XIII also contains a complete classification of the Platonic
solids - perhaps the first really interesting classification
theorem in mathematics, and certainly the first "ADE classification":

13) Euclid, Elements, Book XIII, Proposition 18, online version
due to David Joyce at
http://aleph0.clarku.edu/~djoyce/java/elements/bookXIII/propXIII18.html

If you don't know about ADE classifications, see "week62".

I got curious about this "ancient note written in the margins of a
copy of Euclid" that Artmann mentions. It seemed too good to be true.
Just for fun, I tried to track down the facts about this, using only
my web browser here in Shanghai.

First of all, if you're imagining an old book in a library somewhere
with marginal notes scribbled by a pal of Theaetetus, dream on.
It ain't that simple! Our knowledge of Euclid's original Elements
relies on copies of copies of copies... and centuries of detective
work, with each detective having to root through obscure journals
and dim-lit library basements to learn what the previous detectives
did.

The oldest traces of Euclid's Elements are pathetic fragments of
papyrus. People found some in a library roasted by the eruption
of Mount Vesuvius in 79 AD, some more in a garbage dump in the
Egyptian town of Oxyrhynchus (see "week221"), and a couple more in
the Fayum region near the Nile. All these were written centuries
after Euclid died. For a look at one, try this:

14) Bill Casselman, One of the oldest extant diagrams from Euclid,
http://www.math.ubc.ca/~cass/Euclid/papyrus/

The oldest nearly complete copy of the Elements lurks in a library
called the Bodleian at Oxford. It dates back to 888 AD, about a
millennium after Euclid.

More copies date back to the 10th century; you can find their stories
here:

15) Thomas L. Heath, editor, Euclid's Elements, chap. V: the text,
Cambridge U. Press, Cambridge, 1925. Also available at
http://www.perseus.tufts.edu/cgi-bin/ptext?lookup=Euc.+5

16) Menso Folkerts, Euclid's Elements in Medieval Europe,
http://www.math.ubc.ca/~cass/Euclid/folkerts/folkerts.html

All these copies are somewhat different. So, getting at Euclid's
original Elements is as hard as sequencing the genome of Neanderthal
man, seeing a quark, or peering back to the Big Bang!

A lot of these copies contain "scholia": comments inserted by
various usually unnamed copyists. These were collected and
classified by a scholar named Heiberg in the late 1800s:

17) Thomas L. Heath, editor, Euclid's Elements, chap. VI: the scholia,
Cambridge U. Press, Cambridge, 1925. Also available at
http://www.perseus.tufts.edu/cgi-bin/ptext?lookup=Euc.+6

One or more copies contain a scholium about Platonic solids in
book XIII. Which copies? Ah, for that I'll have to read Heiberg's
book when I get back to UC Riverside - our library has it, I'm
proud to say.

And, it turns out that another scholar named Hultsch argued
that this scholium was written by Geminus of Rhodes.

Geminus of Rhodes was an astronomer and mathematician who may have
lived between 130 and 60 BC. He seems like a cool dude. In his
Introduction to Astronomy, he broke open the "celestial sphere",
writing:

... we must not suppose that all the stars lie on one surface,
but rather that some of them are higher and some are lower.

And in his Theory of Mathematics, he proved a classification theorem
stating that the helix, the circle and the straight line are the only
curves for which any portion is the same shape as any other portion
with the same length.

Anyway, the first scholium in book XIII of Euclid's Elements, which
Hultsch attributes to Geminus, mentions

... the five so-called Platonic figures which, however, do not
belong to Plato, three of the five being due to the Pythagoreans,
namely the cube, the pyramid, and the dodecahedron, while the
octahedron and the icosahedron are due to Theaetetus.

So, that's what I know about the origin of the icosahedron!
Someday I'll read more, so let me make a note to myself:

18) Benno Artmann, Antike Darstellungen des Ikosaeders, Mitt.
DMV 13 (2005), 45-50. (Here the drawing of the icosahedron in
Euclid's Elements is analysed in detail.)

19) A. E. Taylor, Plato: the Man and His Work, Dover Books, New
York, 2001, page 322. (This discusses traditions concerning
Theaetetus and Platonic solids.)

20) Euclid, Elementa: Libri XI-XIII cum appendicibus, postscript
by Johan Ludvig Heiberg, edited by Euangelos S. Stamatis,
Teubner BSB, Leipzig, 1969. (Apparently this contains information
on the scholium in book XIII of the Elements.)

Now for something a bit newer: categorification and quantum mechanics.
I've said so much about this already that I'm pretty much talked out:

21) John Baez and James Dolan, From finite sets to Feynman diagrams,
in Mathematics Unlimited - 2001 and Beyond, vol. 1, eds. Bjoern
Engquist and Wilfried Schmid, Springer, Berlin, 2001, pp. 29-50.

22) John Baez and Derek Wise, Quantization and Categorification,
Quantum Gravity Seminar lecture notes, available at:
http://math.ucr.edu/home/baez/qg-fall2003/
http://math.ucr.edu/home/baez/qg-winter2004/
http://math.ucr.edu/home/baez/qg-spring2004/

As I explained in "week185", many basic facts about harmonic
oscillators, Fock space and Feynman diagrams have combinatorial
interpretations. For example, the commutation relation between
the annihilation operator a and the creation operator a*:

aa* - a*a = 1

comes from the fact that if you have some balls in a box, there's one
more way to put a ball in and then take one out than to take one out
and then put one in! This way of thinking amounts to using finite
sets as a substitute for the usual eigenstates of the number operator,
so we're really "categorifying" the harmonic oscillator: giving it a
category of states instead of a set of states.
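
The balls-in-a-box reading of the commutation relation can be checked
mechanically. In the sketch below (my illustration, not from the papers
cited), a state is just a list of coefficients over the number states,
with the combinatorial normalization a*|n> = |n+1> and a|n> = n|n-1>,
where the factor n records the n ways to take one of n balls out of the box:

```python
# A state is a list of coefficients [c_0, c_1, ...] over the number
# states |0>, |1>, |2>, ...  Combinatorial normalization:
#   a* |n> = |n+1>    (one way to put a new ball into the box)
#   a  |n> = n |n-1>  (n ways to take one of n balls out)

def create(v):
    return [0] + v                                # shift |n> up to |n+1>

def annihilate(v):
    return [n * c for n, c in enumerate(v)][1:]   # |n> -> n |n-1>

def commutator(v):
    put_then_take = annihilate(create(v))
    take_then_put = create(annihilate(v))
    return [x - y for x, y in zip(put_then_take, take_then_put)]
```

For any coefficient list v, commutator(v) returns v itself: aa* - a*a
acts as the identity, which is exactly the "one more way to put a ball
in and then take one out" statement.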

Working out the detailed consequences takes us through Joyal's
theory of "structure types" or "species" - see "week202" - and
on to more general "stuff types". Some nice category and
2-category theory is needed to make the ideas precise. For a
careful treatment, see this thesis by a student of Ross Street:

23) Simon Byrne, On Groupoids and Stuff, honors thesis,
Macquarie University, 2005, available at
http://www.maths.mq.edu.au/~street/ByrneHons.pdf and
http://math.ucr.edu/home/baez/qg-spring2004/ByrneHons.pdf

However, none of this work dealt with the all-important *phases*
in quantum mechanics! For that, we'd need a generalization of
finite sets whose cardinality can be complex. And that's what
my student Jeffrey Morton introduces here:

24) Jeffrey Morton, Categorified algebra and quantum mechanics,
to appear in Theory and Applications of Categories. Also available
as math.QA/0601458.

He starts from the beginning, explains how and why one would
try to categorify the harmonic oscillator, introduces the
"U(1)-sets" and "U(1)-stuff types" needed to do this, and shows
how the usual theorem expressing time evolution of a perturbed
oscillator as a sum over Feynman diagrams can be categorified.
His paper is now *the* place to read about this subject. Take
a look!

-----------------------------------------------------------------------
Previous issues of "This Week's Finds" and other expository articles on
mathematics and physics, as well as some of my research papers, can be
obtained at

http://math.ucr.edu/home/baez/

For a table of contents of all the issues of This Week's Finds, try

http://math.ucr.edu/home/baez/twfcontents.html

A simple jumping-off point to the old issues is available at

http://math.ucr.edu/home/baez/twfshort.html

If you just want the latest issue, go to

http://math.ucr.edu/home/baez/this.week.html

david.c...@tuebingen.mpg.de

Jul 27, 2006, 11:00:17 AM
> For that, we'd need a generalization of finite sets whose cardinality can be be complex.

Has anyone since done anything with the idea
http://groups.google.co.uk/group/sci.math.research/browse_frm/thread/be51a2eadc7ed8d2/7fd3ff8198196ace
that the set of Motzkin paths has cardinality i?

John Baez

Jul 27, 2006, 7:30:49 AM
As a kind of followup to "week236", here's a question:

It's easy to map the ordinal omega^2 into the real numbers
in a one-to-one and order-preserving way. Here's an artist's
conception, which uses the second dimension to make things
easier to see:

http://math.ucr.edu/home/baez/omega_squared.png

It's also easy to do this for omega^n for any finite n.
I think I can also do it for omega^omega. After that I
get tired.

Which ordinals can we do this for?
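
For concreteness, here is one way to realize the omega^2 case in code (a
sketch; the particular choice of nested intervals is just one convenient
option): send the ordinal omega*a + b into [0,1) by giving each "block" a
its own subinterval and placing element b inside it.

```python
def embed_omega_squared(a, b):
    # Order-preserving map omega*a + b -> [0, 1).
    # Block a occupies the interval [1 - 1/(a+1), 1 - 1/(a+2));
    # within that block, element b sits at the fraction 1 - 1/(b+1).
    lo = 1 - 1 / (a + 1)
    hi = 1 - 1 / (a + 2)
    return lo + (hi - lo) * (1 - 1 / (b + 1))

# Check order preservation on an initial segment: enumerating omega*a + b
# in lexicographic order of (a, b) must give strictly increasing reals.
points = [embed_omega_squared(a, b) for a in range(50) for b in range(50)]
assert points == sorted(points)
assert len(set(points)) == len(points)
```

Nesting the same trick, with a fresh scale at each level, handles
omega^n for any finite n.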

Questions slightly like this were what got Cantor interested in
ordinals in the first place - for example, he noticed that a
Fourier series must have coefficients going to zero if its
sum converges on [0,2pi] - S, where S is any set whose nth
derived set is empty. The set shown above is one whose 2nd
derived set is nonempty but whose 3rd derived set is empty.

Then he went wild and started thinking about the kth derived
set where k is any ordinal.

(The 0th derived set of S is S; the (k+1)st derived set
is the set of limit points of the kth derived set, and
when k is a limit ordinal we define the kth derived set to
be the intersection of all the jth derived sets with j < k.)

But, while some examples of ordinals embedded in R in an
order-preserving way are also nice examples of sets whose
kth derived set is nonempty for big k, I don't know any
stricter relation between my question and Cantor's.

----------------------------------------------------------------------

"In mathematics the art of proposing a question must be held of
higher value than solving it." - Georg Cantor

tc...@lsa.umich.edu

Jul 28, 2006, 10:30:15 AM
In article <ea83ig$qmq$1...@news.ks.uiuc.edu>,

John Baez <ba...@math.removethis.ucr.andthis.edu> wrote:
>Logicians including Feferman and Schuette have carried out a detailed
>analysis of this subject. They know a lot about how much induction
>up to different ordinals buys you. And apparently, induction up to
>Gamma_0 lets us prove the consistency of a system called "predicative
>analysis". I don't understand this, nor do I understand the claim
>I've seen that Gamma_0 is the first ordinal that cannot be defined
>predicatively - i.e., can't be defined without reference to itself.
>Sure, saying Gamma_0 is the first solution of
>
>phi_x(0) = x
>
>is non-predicative. But what about saying that Gamma_0 is the union
>of all ordinals in the Veblen hierarchy? What's non-predicative
>about that?
>
>If anyone could explain this in simple terms, I'd be much obliged.

The situation is somewhat akin to the situation with the Church-Turing
thesis, in that one is tentatively equating an informal notion
(predicativity or computability) with a precise mathematical notion.
Therefore there is no definitive answer to your question, and Feferman
himself has articulated potential objections to the "standard view"
that Gamma_0 marks the boundary of predicativity.

Having said that, I'll also say that one of the reasons for the standard
view is that Gamma_0 marks the boundary of "autonomous progressions" of
arithmetical theories. The book by Torkel Franzen that you cited is
probably the most accessible introduction to this subject. Roughly
speaking, the idea is that if anyone fully accepts first-order Peano
arithmetic PA, then implicitly he accepts its consistency Con(PA), as
well as Con(PA+Con(PA)), etc. If one tries to articulate exactly what
is "implicitly" involved in accepting PA in this sense, then one can
make a plausibility argument that Gamma_0 is a natural stopping point.
I think you have a better shot at grasping the underlying intuition via
this approach than by staring at Gamma_0 itself and trying to figure out
what is non-predicative about its definition.
--
Tim Chow tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth. ---Galileo, Dialogues Concerning Two New Sciences

Jim Heckman

Jul 28, 2006, 10:30:15 AM

On 26-Jul-2006, ba...@math.removethis.ucr.andthis.edu (John Baez)
wrote in message <ea83ig$qmq$1...@news.ks.uiuc.edu>:

[...]

> But as you might have suspected, not *all* ordinals can be written
> in this way. For one thing, every ordinal we've reached so far is
> *countable*: as a set you can put it in one-to-one correspondence
> with the integers. There are much bigger *uncountable* ordinals -
> at least if you believe you can well-order uncountable sets.

? Is that last a reference to the Well-Ordering Theorem (equivalent
in ZFC to the Axiom of Choice)? Of course, you do need the WOT to
prove that /every/ set can be well-ordered, but ZF alone proves the
existence of uncountable ordinals.

[...]

--
Jim Heckman

David Madore

Jul 28, 2006, 11:30:08 AM
John Baez in litteris <ead71m$etd$1...@news.ks.uiuc.edu> scripsit:

> It's easy to map the ordinal omega^2 into the real numbers
> in a one-to-one and order-preserving way. Here's an artist's
> conception, which uses the second dimension to make things
> easier to see:
>
> http://math.ucr.edu/home/baez/omega_squared.png

Thanks for calling me an artist :-) but I don't think I deserve the
title. I created that image for Wikipedia, see <URL:
http://commons.wikimedia.org/wiki/Image:Omega_squared.png > for a
larger version.

> Which ordinals can we do this for?

If you're asking which ordinals are order-isomorphic to a subset of
the real numbers, the answer is simple (at least, assuming the axiom
of choice): exactly the countable ordinals. First, any well-ordered
subset of the reals is countable because between any element of the
subset and the next ("the next" makes sense, of course, since the set
is well-ordered) there is a rational. Conversely, we can prove by
transfinite induction that any countable ordinal can be embedded in
the reals: it is true of 0, if it is true of alpha it is true of
alpha+1, and if it is true of every ordinal alpha<delta for a limit
ordinal delta, choose an increasing sequence alpha_n leading up to
delta, embed each alpha_n between (n-1)/n and n/(n+1) and put them
together... the details are left to the reader (it may be necessary
to remove some elements since we took the sum of the alpha_n rather
than the limit, but it can also be arranged so that the two coincide).

In fact, every countable ordinal can be embedded in the reals as a
closed set (then the embedding is a homeomorphism from the ordinal
with the order topology to the subset of the reals with the induced
topology), or as a discrete set, but obviously not both (except up to
omega).

[I won't bet on what happens in the absence of Choice, but it wouldn't
at all surprise me if it were consistent that some large countable
ordinals can't be embedded in the real line.]

I had produced a graphical representation of epsilon_0, once, but it's
actually entirely uninteresting to look at, it's just a mess.

--
David A. Madore
(david....@ens.fr,
http://www.madore.org/~david/ )

Ian A. Mason

Jul 28, 2006, 12:00:14 PM

Any finite linear ordering can be embedded in Q. Any countable model
of Th(Q) is isomorphic to Q. Hence any countable linear ordering can
be embedded into Q, by compactness and Lowenheim-Skolem, using the
above and the diagram of the l.o. in question.

So all countable well-orderings (i.e. < omega_1). Off the top of
my head (on my first coffee of the day), I'd say you'd have
problems with points whose cofinality was greater than omega,
since R is first countable.

Kevin Buzzard

Jul 28, 2006, 12:00:15 PM
John Baez <ba...@math.removethis.ucr.andthis.edu> wrote:

[snip]

> At first these numbers seem to keep getting bigger! So, it seems
> shocking at first that they eventually reach zero. For example,
> if you start with the number 4, you get this Goodstein sequence:
>
> 4, 26, 41, 60, 41, 60, 83, 109, 139, 173, 211, 253, 299, 348, ...
>
> and apparently it takes about 3 x 10^{60605351} steps to reach zero!
> You can try examples yourself on this applet:
>
> 1) National Curve Bank, Goodstein's theorem,
> http://curvebank.calstatela.edu/goodstein/goodstein.htm

[note to jb: you wrote 41, 60 twice]

Although this number 3 x 10^{60605351} is the number quoted on the
website above, I did a back of an envelope calculation which
seemed to indicate that it took about (that number)^2 steps to
reach zero. In fact the website only claims that the sequence *increases*
until the 3 x 10^{60605351}th term, but it's not hard to check
that once the sequence has stopped increasing, it starts
decreasing very soon afterwards. Did I make a slip? I think
that the (n*2^n - 2)th term is zero, where n = 24*2^24.

Kevin

victor_me...@yahoo.co.uk

Jul 28, 2006, 2:00:08 PM
John Baez wrote:
> As a kind of followup to "week236", here's a question:
>
> It's easy to map the ordinal omega^2 into the real numbers
> in a one-to-one and order-preserving way. Here's an artist's
> conception, which uses the second dimension to make things
> easier to see:
>
> http://math.ucr.edu/home/baez/omega_squared.png
>
> It's also easy to do this for omega^n for any finite n.
> I think I can also do it for omega^omega. After that I
> get tired.
>
> Which ordinals can we do this for?

As other correspondents have pointed out any
countable ordinal, indeed any countable totally ordered set,
can be (order-preserving) embedded in Q.

I give a simple proof.

Let A = {a_1, a_2, ... } be a countable totally ordered set.
We need an order-preserving injection f: A -> Q.
Clearly f is order preserving iff its restriction to
A_n = {a_1,...,a_n} is order preserving for all n. Now it
is easy to extend an order preserving map
A_n -> Q to an order preserving map
A_{n+1} -> Q, simply by mapping a_{n+1}
into the appropriate one of the n+1 intervals
defined by the endpoints f(a_1),...,f(a_n).

By recursion there is an order preserving map from
A to Q.

On the other hand there is no embedding of
an uncountable ordinal O into R. To see this, for
each m in O take a rational between f(m) and
f(m+1). These are all distinct.
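
Victor's recursion can be run literally on a computer for any finite
prefix a_1, ..., a_n. A sketch (my code, assuming the elements are
pairwise comparable and distinct): keep the images sorted, and drop each
new element's image into the gap determined by its already-placed
neighbours, using exact rationals.

```python
from bisect import bisect_left
from fractions import Fraction

def embed_into_Q(elements):
    # Greedy order-preserving injection a_1, a_2, ... -> Q:
    # each new element is sent into the appropriate one of the n+1
    # intervals cut out by the images of the n elements placed so far.
    placed_keys = []   # elements placed so far, in increasing order
    placed_imgs = []   # their images in Q, in the same order
    images = {}
    for x in elements:
        i = bisect_left(placed_keys, x)
        if not placed_keys:
            img = Fraction(0)
        elif i == 0:
            img = placed_imgs[0] - 1                        # below all images so far
        elif i == len(placed_keys):
            img = placed_imgs[-1] + 1                       # above all images so far
        else:
            img = (placed_imgs[i - 1] + placed_imgs[i]) / 2  # strictly between neighbours
        placed_keys.insert(i, x)
        placed_imgs.insert(i, img)
        images[x] = img
    return images
```

Feeding it any shuffled finite set of integers returns an
order-preserving map into Q; by the recursion in the post, the same
procedure extends to the whole countable order.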

Victor Meldrew

G. A. Edgar

Jul 28, 2006, 2:00:21 PM
In article <eadai0$g34$1...@news.ks.uiuc.edu>, David Madore
<david....@ens.fr> wrote:

> > Which ordinals can we do this for?
>
> If you're asking which ordinals are order-isomorphic to a subset of
> the real numbers, the answer is simple (at least, assuming the axiom
> of choice): exactly the countable ordinals.

And more generally: any countable totally ordered set can be imbedded
in the rationals.

--
G. A. Edgar http://www.math.ohio-state.edu/~edgar/

John Baez

Jul 29, 2006, 12:30:08 PM
In article <eadai0$g34$1...@news.ks.uiuc.edu>,
David Madore <david....@ens.fr> wrote:

>John Baez in litteris <ead71m$etd$1...@news.ks.uiuc.edu> scripsit:

>> It's easy to map the ordinal omega^2 into the real numbers
>> in a one-to-one and order-preserving way. Here's an artist's
>> conception, which uses the second dimension to make things
>> easier to see:
>>
>> http://math.ucr.edu/home/baez/omega_squared.png

>Thanks for calling me an artist :-) but I don't think I deserve the
>title. I created that image for Wikipedia [....]

Thanks! I didn't check to see who made it. The phrase "artist's
conception" was intended as a slight joke, since in pop science
magazines one often reads things like "here is an artist's conception
of romance among australopithecines" adorning pictures that required
a lot of imagination to draw - but this time, it was actually a
mathematically precise picture!

>> Which ordinals can we do this for?

>If you're asking which ordinals are order-isomorphic to a subset of
>the real numbers, the answer is simple (at least, assuming the axiom
>of choice): exactly the countable ordinals.

Yay! Great! That's exactly what I was asking.

>I had produced a graphical representation of epsilon_0, once, but it's
>actually entirely uninteresting to look at, it's just a mess.

If you still have it around, I would be interested to see it - and maybe
even attach it to week236. I can see why it would be a mess, though.

I suppose drawing it bigger wouldn't help, but it might be fun to
take some large ordinal and draw it in your style on the scale of
this artist's conception of a hydrogen atom:

http://www.phrenopolis.com/perspective/atom/index.html

This may be the world's biggest webpage: it's 18 kilometers wide!
(That's 50 million pixels at 72 pixels per inch.)

I hadn't known my webbrowser could scroll that far. My wrist didn't
even get tired. So, it might be possible to draw omega^omega or
something and have it look interesting, even if epsilon_0 is too big.


John Baez

Jul 29, 2006, 12:30:09 PM
In article <eaake1$jp6$1...@news.ks.uiuc.edu>,
<david.c...@tuebingen.mpg.de> wrote:

Maybe you meant Motzkin *trees*. In case anyone is wondering,
these are rooted planar trees where each node has one or two
daughter nodes. The set M of Motzkin trees is equipped with
an obvious isomorphism

M = 1 + M + M^2

since every Motzkin tree is either a one-node tree, a node
connected by an edge to another Motzkin tree, or a node connected
by two edges to two Motzkin trees.

Using the techniques of Schanuel, Gates, Leinster and Fiore,
the "generalized cardinality" |M| of the set of Motzkin trees
satisfies

|M| = 1 + |M| + |M|^2

so

|M| = +-i

This sort of reasoning seems completely insane at first, but it
leads to many valid and interesting results; for details see

http://math.ucr.edu/home/baez/week202.html
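
As a sanity check on the numerology: i (and -i) really does solve the
fixed-point equation, and the isomorphism M = 1 + M + M^2, read size by
size, is mirrored by the standard recurrence for the Motzkin numbers
(OEIS A001006). A quick sketch (mine, not from the post):

```python
# The cardinality equation: both i and -i solve x = 1 + x + x^2.
i = complex(0, 1)
assert 1 + i + i**2 == i
assert 1 + (-i) + (-i)**2 == -i

# The isomorphism M = 1 + M + M^2 gives the usual Motzkin recurrence
#   m_0 = 1,  m_{n+1} = m_n + sum_k m_k * m_{n-1-k},
# where (if I've lined the sizes up right) m_n counts Motzkin trees
# with n+1 nodes.
def motzkin(n_max):
    m = [1]
    for n in range(n_max):
        m.append(m[n] + sum(m[k] * m[n - 1 - k] for k in range(n)))
    return m

assert motzkin(6) == [1, 1, 2, 4, 9, 21, 51]   # OEIS A001006
```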

ANYWAY:

Jeff Morton and I put a lot of work into this idea when we were
trying to categorify the quantum harmonic oscillator. The Motzkin
trees are a categorification of the Gaussian integers; the
"+-i" hints that Galois theory is relevant. We figured out how
to categorify the algebraic integers in any algebraic extension of
the rationals, getting an "algebraic extension" of the category
of finite sets. We figured out the beginnings of a theory that
associates a "Galois 2-group" to any such algebraic extension.
I was pretty excited about this, but Jeff was eager to reach ideas
connected to physics, and this seemed like a long way around.
In particular, one needs not just algebraic numbers but also
transcendentals to make sense of the "exp(-itH)" in quantum mechanics.

So, we dropped this project and came up with a much simpler
category of "U(1)-sets" whose "cardinalities" are complex:
a U(1)-set is simply a finite set of points ("quanta") labelled
by phases. Here we are putting the phases in "by hand" instead
of seeing them emerge from category-theoretic considerations.
This is a bit unfortunate, but the advantage is that everything
works quite quickly and smoothly, and there's a clear physical
meaning to it all.
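
To illustrate the flavour of this, here is a toy model of my own (not
Morton's actual formalism), under the assumption that the cardinality
of a U(1)-set is the sum of its phase labels:

```python
import cmath

# Toy model: a U(1)-set is a finite list of quanta, each carrying a
# phase e^{i*theta}; its "cardinality" is taken to be the sum of the
# phases (an assumption made here for illustration).
def u1_cardinality(thetas):
    return sum(cmath.exp(1j * t) for t in thetas)

# An ordinary n-element set is the special case where every phase is 1,
# recovering the usual cardinality n:
assert abs(u1_cardinality([0.0] * 5) - 5) < 1e-12
```

In particular the cardinality of such a set can be any complex number
of modulus at most the number of quanta.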

If anyone who knows categories, combinatorics and Galois theory
wants to become a math grad student at UCR and work on the project
Jeff and I dropped, they should contact me.


John Baez

Jul 29, 2006, 12:30:09 PM
In article <ead71n$etf$1...@news.ks.uiuc.edu>, <tc...@lsa.umich.edu> wrote:

>In article <ea83ig$qmq$1...@news.ks.uiuc.edu>,
>John Baez <ba...@math.removethis.ucr.andthis.edu> wrote:

>>Logicians [...] know a lot about how much induction
>>up to different ordinals buys you. And apparently, induction up to
>>Gamma_0 lets us prove the consistency of a system called "predicative
>>analysis". I don't understand this, nor do I understand the claim
>>I've seen that Gamma_0 is the first ordinal that cannot be defined
>>predicatively - i.e., can't be defined without reference to itself.
>>Sure, saying Gamma_0 is the first solution of
>>
>>phi_x(0) = x
>>
>>is non-predicative. But what about saying that Gamma_0 is the union
>>of all ordinals in the Veblen hierarchy? What's non-predicative
>>about that?

>The situation is somewhat akin to the situation with the Church-Turing
>thesis, in that one is tentatively equating an informal notion
>(predicativity or computability) with a precise mathematical notion.
>Therefore there is no definitive answer to your question, and Feferman
>himself has articulated potential objections to the "standard view"
>that Gamma_0 marks the boundary of predicativity.

There's also someone named Nik Weaver who has debated Feferman
on this subject:

http://www.cs.nyu.edu/pipermail/fom/2006-April/010472.html
http://www.math.wustl.edu/~nweaver/conceptualism.html

He seems to claim that Gamma_0 and even larger ordinals have predicative
definitions. However, I'm too ignorant to follow this debate.
Usually in physics I have a sense for when people are being reasonable
even if I don't follow the details. In this debate I can't even
do that.

>Having said that, I'll also say that one of the reasons for the standard
>view is that Gamma_0 marks the boundary of "autonomous progressions" of
>arithmetical theories. The book by Torkel Franzen that you cited is
>probably the most accessible introduction to this subject.

This summer I'm in Shanghai without any academic affiliation, so it's
hard to get that book. When I return to Riverside in the fall I'll
try to read it. But my curiosity is burning right now, so I'll take
the liberty of asking some more questions.

>Roughly
>speaking, the idea is that if anyone fully accepts first-order Peano
>arithmetic PA, then implicitly he accepts its consistency Con(PA), as
>well as Con(PA+Con(PA)), etc.

I assume that by "etcetera" you mean there's one theory like this
per ordinal. I browsed a paper by Franzen where he was trying
to explicate how these theories actually let you prove interesting
new stuff.

It's a bit mysterious: I imagine a guy sitting there thinking
"Peano arithmetic is true, so I know it's consistent, and I know
*that's* consistent too, and I know *that's* consistent...", and
so on - and after pondering this way for a transfinite amount of time,
all of a sudden he can do new stuff like prove that Goodstein
sequences approach zero!

I think Franzen was trying to dispel this naive conception.
He said the real action happens at limit ordinals, where
the interpretation of everything changes in some sneaky way.

But, my understanding of his comments is like an impressionist
painting of a surreal painting - Dali's "Sacrament of the Last
Supper" as reworked by Monet.

(Hey, I managed to sneak a dodecahedron into the discussion!)

>If one tries to articulate exactly what
>is "implicitly" involved in accepting PA in this sense, then one can
>make a plausibility argument that Gamma_0 is a natural stopping point.

It would be really great if you could say more about this
plausibility argument.

>I think you have a better shot at grasping the underlying intuition via
>this approach than by staring at Gamma_0 itself and trying to figure out
>what is non-predicative about its definition.

Okay, I won't try to do that.


John Baez

Jul 29, 2006, 12:30:09 PM
In article <ead71n$eth$1...@news.ks.uiuc.edu>,
Jim Heckman <weu_rznv...@lnubb.pbz.invalid> wrote:

>On 26-Jul-2006, ba...@math.removethis.ucr.andthis.edu (John Baez)
>wrote in message <ea83ig$qmq$1...@news.ks.uiuc.edu>:

>> But as you might have suspected, not *all* ordinals can be written
>> in this way. For one thing, every ordinal we've reached so far is
>> *countable*: as a set you can put it in one-to-one correspondence
>> with the integers. There are much bigger *uncountable* ordinals -
>> at least if you believe you can well-order uncountable sets.

>? Is that last a reference to the Well-Ordering Theorem (equivalent
>in ZFC to the Axiom of Choice)? Of course, you do need the WOT to
>prove that /every/ set can be well-ordered, but ZF alone proves the
>existence of uncountable ordinals.

That's interesting; I don't know if I ever knew that! The last
time I really studied axiomatic set theory was decades ago.

Anyway, I can easily imagine reasonable people who are comfy up
to omega or epsilon_0 (say) but don't believe you can well-order
any uncountable sets. So, I didn't want to get into a fight by
claiming bluntly that there *are* uncountable ordinals, without
any sort of caveat. I didn't want to be advocating ZFC - but now
that you bring it up, I don't even want to be advocating ZF.

But, I don't want to argue *against* them, either.

In fact, these days to get my back up you'd need to take a fairly
drastic position, like my friend Henry Flynt, who argues that
"mathematical knowledge amounts to the crystallization of officially
endorsed delusions in an intellectual quicksand":

http://www.henryflynt.org/studies_sci/mathsci.html

John Baez

Jul 29, 2006, 12:30:10 PM
In article <ea83ig$qmq$1...@news.ks.uiuc.edu>,
John Baez <ba...@math.removethis.ucr.andthis.edu> wrote:

>At first these numbers seem to keep getting bigger! So, it seems
>shocking at first that they eventually reach zero. For example,
>if you start with the number 4, you get this Goodstein sequence:
>
>4, 26, 41, 60, 41, 60, 83, 109, 139, 173, 211, 253, 299, 348, ...
>
>and apparently it takes about 3 x 10^{60605351} steps to reach zero!

Kevin Buzzard pointed out a typo here. The sequence is:

4, 26, 41, 60, 83, 109, 139, 173, 211, 253, 299, 348, ...

Also, while I got the huge number above from this website:

http://curvebank.calstatela.edu/goodstein/goodstein.htm

he pointed out they actually say the sequence "can increase for
approximately 2.6 * 10^{60605351} steps", not that it reaches
zero at this point.

Kevin then worked out the details himself, and I checked his
calculations. We now seem to agree that the sequence reaches
zero at the kth term, where

k = 24 * 2^24 * 2^{24 * 2^{24}} - 2

or approximately

k = 6.9 * 10^{121210694}

Please check and see if we've done it right.
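
Both claims are easy to probe mechanically. Here is a sketch (my code,
not Kevin's): bump rewrites a number in hereditary base b1 and
reinterprets every b1 as b2, which is exactly the base-bumping step of a
Goodstein sequence, and a logarithm confirms the size of k.

```python
from math import log10

def bump(n, b1, b2):
    # Write n in hereditary base-b1 notation (exponents rewritten
    # recursively), then replace every occurrence of b1 by b2.
    result, power = 0, 0
    while n:
        n, digit = divmod(n, b1)
        if digit:
            result += digit * b2 ** bump(power, b1, b2)
        power += 1
    return result

def goodstein(start, terms):
    # First `terms` terms of the Goodstein sequence: bump the base
    # from b to b+1, then subtract 1, stopping early at zero.
    seq, g, b = [start], start, 2
    while len(seq) < terms and g:
        g = bump(g, b, b + 1) - 1
        b += 1
        seq.append(g)
    return seq

# Matches the corrected sequence above:
assert goodstein(4, 8) == [4, 26, 41, 60, 83, 109, 139, 173]

# Size of k = 24 * 2^24 * 2^(24 * 2^24) - 2, via log10 of n * 2^n:
n = 24 * 2**24
lg = log10(n) + n * log10(2)
assert int(lg) == 121210694
assert abs(10 ** (lg - int(lg)) - 6.9) < 0.3   # so k is about 6.9 * 10^121210694
```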

You may also enjoy trying to figure out what the folks at the
National Curve Bank meant, and whether *they* were right.

Here is Kevin's email, prettied up by me, but perhaps with some
mistakes added:

> apparently it takes about 3 x 10^{60605351} steps to reach zero!

You write this as if it were some kind of mystery. I remember working
out this number explicitly when I was a graduate student! There is
some nice form for it, as I recall. Let's see if I can reconstruct
what I did.

If I've understood the sequence correctly, it should be (where "n)"
at the beginning of a line denotes we're working in base n on this
line, so strictly speaking it's probably the n-1st term in the sequence)

2) 2^2 = 4
3) 3^3-1 = 2.3^2+2.3+2 = 26 [note: base 3, ends in 2, and 3+2=5]
4) 2.4^2+2.4+1 = 41 [note: base 4, ends in 1, and 4+1=5]
5) 2.5^2+2.5 = 60 [we're at a limit ordinal here, note 3+2=4+1=5]
6) 2.6^2+2.6-1 = 2.6^2+6+5 = 83 [note: base 6, ends in 5]
7) 2.7^2+7+4 [note: base 7, ends in 4]
8) 2.8^2+8+3 [note: base 8, ends in 3, so we next get a limit ordinal at...]
.
.
11) 2.11^2+11
12) 2.12^2+12-1 = 2.12^2+11
13) 2.13^2+10
.
.
.
23) 2.23^2 (as 23 = 12+11 = 13+10= ...)
24) 24^2+23.24+23
.
.
.
47) 47^2+23.47
48) 48^2+22.48+47
.
.
.
95) 95^2+22.95
96) 96^2+21.96+95
.
.
.

and now we spot a pattern: we're just doubling---getting a limit ordinal
at bases 24-1, 48-1, 96-1 and so on. Let's look again at those limit
ordinals:

47) 47^2+23.47
95) 95^2+22.95
.
.
.
24*2^t-1) (24*2^t-1)^2+(24-t)*(24*2^t-1)
.
.
.

so the last one with a square in it will be the case t=24, corresponding
to

r) r^2

where

r = 24 * 2^24 - 1 = 402653183.

All those 24s, but I'm sure you'll not get carried away. Let's define

n = r+1 = 24*2^24

and continue on. At the next step, the ordinal decreases sharply:

n) n^2-1 = (n-1)n + (n-1)
n+1) (n-1)(n+1) + (n-2) [note: now back to the usual tricks]
.
.
.
2n-1) (n-1)(2n-1) [the next limit, at base 2n-1]
2n) (n-2)(2n) + (2n-1)
.
.
.
4n-1) (n-2)(4n-1)
4n) (n-2)(4n)+(4n-1)
.
.
.

and the limit ordinals we're running into now (and we're going to
run into about n of them, which is a lot), are

2n-1) (n-1)(2n-1)
4n-1) (n-2)(4n-1)
8n-1) (n-3)(8n-1)
.
.
.
n2^t-1) (n-t)(n2^t - 1)
.
.
.

and finally when t = n-1

m) m

where m = n2^{n-1} - 1. The sequence now looks like

m+1) (m+1)-1 = m
m+2) m-1
m+3) m-2
.
.
.
2m+1) 0

So the sequence becomes zero at base n2^n - 1, where n = 24 * 2^24.
If 2^2 is the first term in the sequence, I guess this is
the (n2^n - 2)th term. I make this about 6.9*10^{121210694} -
curses, you got something else! Actually, I have about the square
of what you wrote and hence I have most likely made a slip. On the other
hand you can see that it's not a mystery at all, it's just an
elementary exercise. It really helps you learn about why
the countable ordinals are well-ordered too: as you continue working
out the numbers, you always have this impending sense of doom
telling you that your gut feeling that the sequence tends to
infinity might just be wrong...

Kevin


David Madore

Jul 30, 2006, 11:53:33 AM
John Baez in litteris <eag2eg$bcn$1...@news.ks.uiuc.edu> scripsit:

> In article <eadai0$g34$1...@news.ks.uiuc.edu>,
> David Madore <david....@ens.fr> wrote:
>>I had produced a graphical representation of epsilon_0, once, but it's
>>actually entirely uninteresting to look at, it's just a mess.
>
> If you still have it around, I would be interested to see it - and maybe
> even attach it to week236. I can see why it would be a mess, though.

Try <URL: http://www.madore.org/~david/.tmp/eps0-2.ps.gz > (gzipped
PostScript file). And if you replace ".ps.gz" by ".c" you have the
(pretty much unreadable) C program which might have been used to
generate it... except that it doesn't seem to generate exactly the
same thing, so I don't really know (I presume some parameter was set
to a different value). As the URL indicates, these files might not
stay long, but you're welcome to do what you wish with them.

tc...@lsa.umich.edu

Jul 30, 2006, 11:52:24 AM
In article <eag2eh$bcp$1...@news.ks.uiuc.edu>,

John Baez <ba...@math.removethis.ucr.andthis.edu> wrote:
>I assume that by "etcetera" you mean there's one theory like this
>per ordinal.
[...]

>>If one tries to articulate exactly what
>>is "implicitly" involved in accepting PA in this sense, then one can
>>make a plausibility argument that Gamma_0 is a natural stopping point.
>
>It would be really great if you could say more about this
>plausibility argument.

Let's look more closely at what the notion of "one theory like this
per ordinal" means. There's no difficulty figuring out what "Con(PA)"
means or how to express that statement in the first-order language
of arithmetic. Ditto with "Con(PA+Con(PA))". However, once you start
ascending the ordinal hierarchy, a difficulty appears. The language
of arithmetic doesn't let you talk about "ordinals" directly---that's a
set-theoretical concept. In order to express a statement like "Con(T)"
for some theory T, you need at minimum to be able to give some sort of
"recursive description" or "recursive axiomatization" of T (where here
I use the word "recursive" in the technical sense of recursive function
theory) in the first-order language of arithmetic. This observation
already yields the intuition that we're not going to be able to ascend
beyond the Church-Kleene ordinal, because we won't even be able to
figure out how to *say* "T is consistent" for a theory T that requires
that many iterations to reach from PA.

There are other problems, though, that potentially get in the way before
we reach the Church-Kleene ordinal. Once we realize that what we need is
a system of "ordinal notations" to "fake" the relevant set theory, we may
(if we are predicativists) worry about issues such as:

1. As we ascend the ordinal hierarchy, isn't it illegitimate to make a jump
to an ordinal alpha unless we've already proved, at the level of some
ordinal beta that we've already reached, that an ordinal of type alpha
exists?

2. And isn't it illegitimate to create sets by quantification over things
other than the natural numbers themselves and sets that we've already
created?

Condition 1 goes by the name of "autonomy" and condition 2 goes by the name
of "ramification." If one formalizes these notions in a certain plausible
manner, then one arrives at Gamma_0 as the least upper bound of theories
that you can get to, starting with (for example) PA.

One can of course wonder whether 1 and 2 above really capture the concept
of "predicativity." Some secondary evidence has accumulated of the
following form: Some argument that intuitively seems to be predicative but
that is not immediately seen to be provable in the Feferman-Schuette
framework is shown, after some work, to indeed be provable below Gamma_0.

It's still possible, of course, for someone---you mentioned Nik Weaver---to
come along and argue that our intuitive notion of predicativism, fuzzy
though it is, can't possibly be identified with the level Gamma_0. The
reason you can't seem to decide immediately whether Weaver's position is
nonsensical or not is probably because the critical questions are not
mathematical but philosophical, and of course it's usually harder to arrive
at definitive answers in philosophy than in mathematics.

david.c...@tuebingen.mpg.de

Jul 30, 2006, 12:01:47 PM

> Maybe you meant Motzkin *trees*.

No, I did mean paths:

"A001006 Motzkin numbers: number of ways of drawing any number of
nonintersecting chords among n points on a circle.

Also number of Motzkin n-paths: paths from (0,0) to (n,0) in an n X n
grid using only steps U = (1,1), F = (1,0) and D = (1,-1)."

http://www.research.att.com/~njas/sequences/A001006

But of course there are loads of combinatorial interpretations,
including the trees you mention (which was actually the way I found the
link between Leinster and Fiore's construction and the Motzkin
numbers.)
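Both descriptions of A001006 are easy to cross-check with a few lines
of code (a throwaway sketch, function names mine): count the lattice
paths directly, and compare with the standard Motzkin recurrence
M(0) = 1, M(n+1) = M(n) + sum_{k<n} M(k)*M(n-1-k).

```python
# Count Motzkin n-paths directly: paths from (0,0) to (n,0) using
# steps U = (1,1), F = (1,0), D = (1,-1), never dipping below the axis.
def motzkin_by_paths(n):
    heights = {0: 1}            # heights[h] = number of partial paths at height h
    for _ in range(n):
        nxt = {}
        for h, c in heights.items():
            for dh in (1, 0, -1):          # U, F, D
                if h + dh >= 0:
                    nxt[h + dh] = nxt.get(h + dh, 0) + c
        heights = nxt
    return heights.get(0, 0)

# The same numbers from the standard recurrence.
def motzkin_by_recurrence(n):
    m = [1]
    for i in range(n):
        m.append(m[i] + sum(m[k] * m[i - 1 - k] for k in range(i)))
    return m[n]

print([motzkin_by_paths(n) for n in range(7)])   # [1, 1, 2, 4, 9, 21, 51]
```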

> If anyone who knows categories, combinatorics and Galois theory
> wants to become a math grad student at UCR and work on the project
> Jeff and I dropped, they should contact me.

Now, that's an excellent offer.

John Baez

Aug 1, 2006, 1:11:55 AM
In article <1154275307.0...@h48g2000cwc.googlegroups.com>,
<david.c...@tuebingen.mpg.de> wrote:

>> Maybe you meant Motzkin *trees*.

>No, I did mean paths...

Okay, whoops - I'd forgotten all about that!

>But of course there are loads of combinatorial interpretations,
>including the trees you mention (which was actually the way I found the
>link between Leinster and Fiore's construction and the Motzkin
>numbers.)

Sorry, I even forgot your role in this. It winds up that trees
are more useful than paths, because if we have any polynomial
fixed-point equation with natural number coefficients, like

X = 349X^7 + 3X^4 + 2

we can interpret it as defining a set of "colored trees".
To do this, we interpret the equal sign as an isomorphism.

For example, the above equation defines the set X of planar,
rooted, finite trees where the root is colored in one of 2
ways, and each node either has 4 branches coming out of it
(in which case this node is colored in one of 3 ways) or
7 branches coming out of it (in which case it's colored in
one of 349 ways).

By the work of Schanuel, Gates, Fiore and Leinster, it makes
sense to assign a cardinality to this set which is a root of
the above polynomial.

A good example is the "Golden Object" discovered by Robin Houston:

http://math.ucr.edu/home/baez/week203.html

The golden number is usually defined by the equation

G^2 = G + 1 (1)

which is not of the fixed-point form described above.
However, Houston made the substitution

G = H + 2 (2)

and noted that

H = H^2 + 4H + 1 (3)

is of the desired fixed-point form. So, H is a set of colored
trees and G is the disjoint union of H and the 2-element set.

So, we can think of G as a set of "colored trees" of a slightly
more general type. (Houston gives another interpretation.)
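For what it's worth, a two-line numerical check (mine, not Houston's)
confirms the algebra above: with G the golden number, H = G - 2 is
indeed a root of equation (3).

```python
import math

G = (1 + math.sqrt(5)) / 2      # the golden number, root of G^2 = G + 1
H = G - 2                       # Houston's substitution, equation (2)

assert abs(G**2 - (G + 1)) < 1e-12          # equation (1): G^2 = G + 1
assert abs(H - (H**2 + 4*H + 1)) < 1e-12    # equation (3): H = H^2 + 4H + 1
```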

Anyway, using this sort of trick we might be able to cook up a
category of colored trees containing the category of finite
sets, but also objects having cardinality equal to any algebraic
number! This would be a candidate for a categorified version
of the algebraic closure of Q.

Jim Heckman

Jul 31, 2006, 5:46:52 PM

On 29-Jul-2006, ba...@math.removethis.ucr.andthis.edu (John Baez)
wrote in message <eag2eh$bcq$1...@news.ks.uiuc.edu>:

> In article <ead71n$eth$1...@news.ks.uiuc.edu>,
> Jim Heckman <weu_rznv...@lnubb.pbz.invalid> wrote:
>
> >On 26-Jul-2006, ba...@math.removethis.ucr.andthis.edu (John Baez)
> >wrote in message <ea83ig$qmq$1...@news.ks.uiuc.edu>:
>
> >> But as you might have suspected, not *all* ordinals can be written
> >> in this way. For one thing, every ordinal we've reached so far is
> >> *countable*: as a set you can put it in one-to-one correspondence
> >> with the integers. There are much bigger *uncountable* ordinals -
> >> at least if you believe you can well-order uncountable sets.
>
> >? Is that last a reference to the Well-Ordering Theorem (equivalent
> >in ZFC to the Axiom of Choice)? Of course, you do need the WOT to
> >prove that /every/ set can be well-ordered, but ZF alone proves the
> >existence of uncountable ordinals.
>
> That's interesting; I don't know if I ever knew that! The last
> time I really studied axiomatic set theory was decades ago.
>
> Anyway, I can easily imagine reasonable people who are comfy up
> to omega or epsilon_0 (say) but don't believe you can well-order
> any uncountable sets. So, I didn't want to get into a fight by
> claiming bluntly that there *are* uncountable ordinals, without
> any sort of caveat. I didn't want to be advocating ZFC - but now
> that you bring it up, I don't even want to be advocating ZF.
>
> But, I don't want to argue *against* them, either.

OK, but I'd be interested to know which ZF axioms your "imagine[d]
reasonable people" don't believe. Or is their problem with
mathematical logic?

[...]

--
Jim Heckman

Tom Leinster

Aug 1, 2006, 4:46:04 PM
On Mon, 31 Jul 2006 21:46:52 +0000, Jim Heckman wrote:

> I'd be interested to know which ZF axioms your "imagine[d] reasonable
> people" don't believe.

For some people it's not a case of "believing" or "not believing" ZF
axioms. It's rather a matter of not believing in a single objective world
of sets.

Compare the situation with Euclidean/non-Euclidean geometry: we don't have
to declare that we "believe" or "don't believe" the parallel postulate.
(Such a declaration would only mean anything if we were referring to some
objective world, e.g. our physical universe.) You simply study whatever
geometrical system fits your purpose. Similarly, you can study whatever
set-theoretic system fits your purpose.

For example, if you're writing about combinatorics you might declare "in
this paper, all sets will be assumed finite". You might only be doing
this in order to save having to write the word "finite" over and over
again. On the other hand, the chances are you'd be doing various
operations on your finite sets (forming products, taking power-sets, etc),
and that would depend on the fact/supposition that the world of finite
sets admits such operations - obeys some of the ZF axioms, if you like.
In terms of belief, it could be said that you've temporarily suspended
your belief in the axiom of infinity. But I don't think "belief" is a
good way to look at it.

One variant of your question is: in what ways could you modify the ZF
axioms and still reasonably call them axioms for "sets"? Obviously this
is a fuzzy question, but it's not so fuzzy as to be meaningless. E.g. I
suppose most people would agree that if you add the Axiom of Choice then
you could still reasonably say that the result (i.e. ZFC) is a system of
axioms for some kind of set theory, but no one would agree that if you
threw out all the ZF axioms and replaced them with axioms for the complex
numbers then those could be called axioms for a set theory.

Tom


John Baez

Aug 1, 2006, 12:01:15 AM
In article <eaiklt$173$1.re...@nef.ens.fr>,
David Madore <david....@ens.fr> wrote:

>John Baez in litteris <eag2eg$bcn$1...@news.ks.uiuc.edu> scripsit:

>> David Madore <david....@ens.fr> wrote:

>>>I had produced a graphical representation of epsilon_0, once, but it's
>>>actually entirely uninteresting to look at, it's just a mess.

>> If you still have it around, I would be interested to see it - and maybe
>> even attach it to week236.
>

>Try <URL: http://www.madore.org/~david/.tmp/eps0-2.ps.gz > (gzipped
>PostScript file). And if you replace ".ps.gz" by ".c" you have the
>(pretty much unreadable) C program which might have been used to
>generate it... except that it doesn't seem to generate exactly the
>same thing, so I don't really know (I presume some parameter was set
>to a different value). As the URL indicates, these files might not
>stay long, but you're welcome to do what you wish with them.

Thanks. Could it be that this PostScript file shows a picture,
not of epsilon_0, but of omega^omega?

There's a countable sequence of really big lines. The first one
is clearly 0. The second is clearly omega. It looks to me like
the third could be omega^2... and so on.

Hmm, but maybe the third really big line is omega^omega.
Clearly if you're drawing epsilon_0 we should see big lines for omega,
omega^omega, omega^omega^omega and so on.

Between the second and third really big lines I see a countable
sequence of "pretty big" lines. I thought these were omega 2,
omega 3, and so on... leading up to omega^2. But now, looking more
carefully, I see some fine structure which suggests they could be
higher ordinals, perhaps leading up to omega^omega.

ANYWAY:

If any hacker out there creates nice pictures of omega^omega
and/or epsilon_0, I'll put them on my website. If you do
both and I think they're really nice, I'll also give you a
signed copy of the new (corrected) version of "Gauge Fields,
Knots and Gravity", as soon as I can buy it from World Scientific
(I got my copy a while ago, so it should be coming out soon.)
Or, if you prefer, some other book of comparable price.

Fine print: if a bunch of people attempt this, I'll give prizes
for the one or two that look the best. To be cool, the picture
should be in David Madore's style:

http://math.ucr.edu/home/baez/omega_squared.png
http://www.madore.org/~david/.tmp/eps0-2.ps.gz

unless you can think of something better. The main problem
with the second picture above is that the fine structure gets
too small too fast, so it's hard to see what's going on.

To be *really* cool, the picture will be an impressively tall
or wide webpage, along these lines:

http://www.phrenopolis.com/perspective/atom/index.html

The world's largest webpage deserves to be a picture of infinity,
not a mere hydrogen atom. Why should physicists have all the fun?

John Baez

Aug 2, 2006, 2:42:30 AM
In article <12csui1...@corp.supernews.com>,
Jim Heckman <weu_rznv...@lnubb.pbz.invalid> wrote:

>OK, but I'd be interested to know which ZF axioms your "imagine[d]
>reasonable people" don't believe. Or is their problem with
>mathematical logic?

I can imagine all sorts of reasonable people who believe all sorts
of things. And, I even know some of them.

For example, I can imagine various sorts of reasonable constructivists:

http://en.wikipedia.org/wiki/Constructivism

and my former student Toby Bartels (who just got his PhD) is one.
Most such people don't believe in the law of excluded middle, so
ZF is right out. And, I believe most of them don't believe you
can well-order uncountable sets, because I've never heard of any
way to "construct" a well-ordered uncountable set, in the technical
sense of "construct".

I can also imagine various sorts of reasonable finitists:

http://en.wikipedia.org/wiki/Finitism

I can also imagine various sorts of reasonable ultrafinitists:

http://en.wikipedia.org/wiki/Ultrafinitism

meaning people who don't believe in unbelievably large finite numbers.
Unfortunately, it seems hard to develop good axioms formalizing this
view, perhaps because the normal concept of proof allows arbitrarily
long proofs. I know Christer Hennix and Alexander Esenin-Volpin have
tried, but I don't know how far they've gotten. Edward Nelson hasn't
worked much on ultrafinitism, but he has expressed sympathetic views in
his book "Predicative Arithmetic". In his article "Mathematics and Faith":

http://www.math.princeton.edu/~nelson/papers/faith.pdf

he writes:

I must relate how I lost my faith in Pythagorean numbers. One
morning at the 1976 Summer Meeting of the American Mathematical
Society in Toronto, I woke early. As I lay meditating about numbers,
I felt the momentary overwhelming presence of one who convicted me
of arrogance for my belief in the real existence of an infinite
world of numbers, leaving me like an infant in a crib reduced to
counting on my fingers. Now I believe in a world where there are no
numbers save that human beings on occasion construct.

Personally I don't advocate any of these positions, and like Tom
Leinster I am happy that you can do mathematics without "believing
in" any specific axiom system.

Personal stuff:

Edward Nelson is a mathematical physicist at Princeton who like me
was a student of Irving Segal. I never discussed logic with him,
though he read and critiqued my senior thesis when I was an undergrad,
and this thesis was on applications of recursive function theory to
quantum mechanics.

I used to argue heatedly with Christer Hennix, because he regarded
all mathematics using infinity as a sham. I should have spent my
time asking him how Esenin-Volpin's alternative system was supposed
to work. But our discussions weren't a total waste, because I met
my wife through a friend of his - Henry Flynt:

http://www.henryflynt.org/

known as the inventor of "concept art", musician, and cognitive
nihilist. I'm not sure Henry Flynt would want to be characterized
as a "reasonable person".

I only met Esenin-Volpin a couple of times. Besides being the son of
the famous Russian poet Sergey Yesenin and the main proponent of
ultra-intuitionism, he is known for being a topologist, a dissident
during the Soviet era, and a political prisoner who spent a total
of 14 years in jail and was exiled to Kazakhstan for 5. His
imprisonments were supposedly for psychiatric reasons, but Vladimir
Bukovsky has been quoted as saying that Volpin's diagnosis was
"pathological honesty":

http://en.wikipedia.org/wiki/Alexander_Esenin-Volpin

to...@mantis.co.uk

Aug 2, 2006, 4:44:35 AM
John Baez wrote:

> http://www.phrenopolis.com/perspective/atom/index.html
>
> The world's largest webpage deserves to be a picture of infinity,
> not a mere hydrogen atom. Why should physicists have all the fun?

Wait a minute... that's not a proton - it's Neptune!

--
Tony Lezard

Alec Edgington

Aug 2, 2006, 3:54:18 PM
John Baez wrote:
> If any hacker out there creates nice pictures of omega^omega
> and/or epsilon_0, I'll put them on my website. If you do
> both and I think they're really nice, I'll also give you a
> signed copy of the new (corrected) version of "Gauge Fields,
> Knots and Gravity", as soon I can buy it from World Scientific
> (I got my copy a while ago, so it should be coming out soon.)
> Or, if you prefer, some other book of comparable price.

Just an idea for how it might be done with omega^omega:

First, here's a nice 1-1 order-preserving map f from omega^omega onto a
subset S of the dyadic rationals: map 0 to 0, and given the ordinal

x = a_n*omega^n + a_(n-1)*omega^(n-1) + ... + a_0

where a_i < omega and a_n>0,

write down a (binary) point, followed by
n 1s, followed by a 0, followed by
(a_n)-1 1s, followed by a 0, followed by
a_(n-1) 1s, followed by a 0, followed by
a_(n-2) 1s, followed by a 0, followed by
...
a_0 1s.

Call this number f(x).
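Here's a sketch of the map f in Python (names mine), representing an
ordinal below omega^omega by its coefficient list [a_0, ..., a_n].
One small caveat: taken literally, the scheme sends the ordinal 1 to
the binary string ".0", i.e. to the same value as 0, so 0 would need
separate treatment if injectivity matters.

```python
from fractions import Fraction

def f(coeffs):
    """Map the ordinal a_n*omega^n + ... + a_0, given as the list
    [a_0, a_1, ..., a_n] with a_n > 0, to a dyadic rational."""
    if not any(coeffs):
        return Fraction(0)                  # f(0) = 0
    n = len(coeffs) - 1
    bits = '1' * n + '0' + '1' * (coeffs[n] - 1)
    for a in reversed(coeffs[:-1]):         # a_(n-1), ..., a_0
        bits += '0' + '1' * a
    # read the bit string as a binary expansion after the point
    return sum(Fraction(int(b), 2**(i + 1)) for i, b in enumerate(bits))

print(f([2]))        # ordinal 2        -> 1/4
print(f([0, 1]))     # omega            -> 1/2
print(f([1, 1]))     # omega + 1        -> 9/16
print(f([0, 2]))     # omega*2          -> 5/8
print(f([0, 0, 1]))  # omega^2          -> 3/4
```

The finite ordinals accumulate below 1/2 = f(omega), the ordinals
omega*k accumulate below 3/4 = f(omega^2), and so on, as one would
hope.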

Then, find some way to represent the 'visibility' of x (the heights of
the lines): for example,

v(x) = c^n + k*c^a_n + k^2*c^a_(n-1) + k^3*c^a_(n-2) + ...

where 0<k,c<1 (there may be prettier ways to do it).

Then, plot v(x) against f(x), where x ranges over some subset of
omega^omega constructed so that it includes all x with v(x) > delta > 0
(this is where the hacking comes in and I give up!)

Could binary expansions be used in a similar (but more complex) way to
represent epsilon_0?

John Baez

Aug 2, 2006, 11:13:22 PM
In article <eadjb8$j12$1...@news.ks.uiuc.edu>,
<victor_me...@yahoo.co.uk> wrote:

>As other correspondents have pointed out any
>countable ordinal, indeed any countable totally ordered set,
>can be (order-preserving) embedded in Q.

Thanks, everyone!

>I give a simple proof.
>
>Let A = {a_1, a_2, ... } be a countable totally ordered set.
>We need an order-preserving injection f: A -> Q.
>Clearly f is order preserving iff its restriction to
>A_n = {a_1,...,a_n} is order preserving for all n. Now it
>is easy to extend an order preserving map
>A_n -> Q to an order preserving map
>A_{n+1} -> Q, simply by mapping a_{n+1}
>into the appropriate one of the n+1 intervals
>defined by the endpoints f(a_1),...,f(a_n).
>
>By recursion there is an order preserving map from
>A to Q.

Okay, that's simple. Nice!

I'd now like to check something I wildly claimed in a
discussion with Jeffrey Winkler, namely that every
ordinal below the Church-Turing ordinal can be embedded
in an order-preserving way as a *recursive* subset of Q.

(If someone knows about recursive subsets of the natural numbers,
they can take some simple formula for enumerating the rationals,
and use this to define recursive subsets of the rational numbers.)

I believe that for every countably infinite ordinal X below
the Church-Turing ordinal, there's an enumeration of X such
that the order relation on X becomes a recursive relation on X.
Is this right?

The procedure Victor Meldrew describes can then be made into
an *algorithm* for getting an order-preserving injection f: X -> Q.

This already means f(X) is a recursively enumerable subset of Q.
How can we be sure it's recursive?

We do something sneaky like this: we make our algorithm choose
a_n so that in lowest terms, its denominator is 2^n. Then there's
an easy algorithm to see if some fraction is in f(X). First we
see if its denominator is 2^n. If it's not, the number ain't in f(X).
If it is, we compute a_n and see if it's our number.
If it is, our number is in f(X). If it ain't, it ain't.

I guess I'm mainly worried about the part where I say "Is this right?"

I'm guessing that since the Church-Turing ordinal is the sup of all
"ordinals that can be described in computable ordinal notations",
it's right.
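Putting the pieces together, here's a sketch (names mine) of Victor
Meldrew's recursion with the sneaky denominator trick: the element
placed at step n goes at a neighbour plus or minus 2^-n. That lands
strictly inside the relevant gap, because points placed at earlier
steps are dyadics spaced at least 2^-(n-1) apart, and the result
always has denominator exactly 2^n in lowest terms.

```python
from fractions import Fraction

def embed(elements, lt):
    """Order-preserving injection into Q: the n-th element (1-based)
    is sent to a fraction whose lowest-terms denominator is 2^n."""
    placed = {}                             # element -> Fraction
    for n, x in enumerate(elements, start=1):
        step = Fraction(1, 2**n)
        below = [q for y, q in placed.items() if lt(y, x)]
        above = [q for y, q in placed.items() if lt(x, y)]
        if below:
            placed[x] = max(below) + step   # just above the predecessor
        elif above:
            placed[x] = min(above) - step   # below the current minimum
        else:
            placed[x] = Fraction(1, 2)      # first element
    return placed

# Example: the naturals reordered with order type omega + omega
# (evens first, then odds).
key = lambda i: (i % 2, i // 2)
lt = lambda i, j: key(i) < key(j)
pos = embed(range(12), lt)
assert all(lt(i, j) == (pos[i] < pos[j]) for i in range(12) for j in range(12))
assert all(pos[i].denominator == 2**(i + 1) for i in range(12))
```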

........................................................................

Jeffery Winkler wrote:
> Are the ridiculously infinite forms of infinity you discuss in your
> article ever used for anything?

Without epsilon_0 you can't prove that Goodstein sequences converge
to zero - an obviously true fact. As I mentioned, the main use of
these ordinals is to measure the strength of axiom systems. But,
I didn't write about these ordinals because they're useful. I wrote
about them because they're fun.

They're not "ridiculously infinite", though. The ordinals I mentioned
are all countable ordered sets, and you can describe them all
*explicitly* as subsets of the rational numbers.

More precisely: any one of the ordinals I mentioned, up to and including
the Feferman-Schuette ordinal (and quite a ways beyond), is isomorphic
as an ordered set to a subset of the rational numbers. Moreover, you
can write a computer program that will decide whether or not any given
fraction is in this subset. As a consequence, you can also write a
computer program that lists the fractions in this set.

It's pretty obvious how to do this for omega^2:

http://math.ucr.edu/home/baez/omega_squared.png

But you can do it for any one of the ordinals I mentioned! David
Madore has drawn a picture of epsilon_0, for example.

So, if you really object to these ordinals as "ridiculously infinite",
you must have some doubts about the legitimacy of computable subsets
of the rational numbers as valid objects of study. That seems like
an extreme position.

The Church-Kleene ordinal is much larger than any of the ordinals
I discussed in detail. It's still countable. Any ordinal below it
can be described in a computable way - but it itself can't. So, if
you believe that only computable mathematical entities are worth
studying, you might want to stop shy of this one. I stopped *far*
short of this one.

> If they only refer to themselves, if the only thing these infinite
> sets refer to is other infinite sets then it's pointless self-reflexive
> recursive circular reasoning.

I hope you see that this is not true for the countable ordinals I
was discussing. I deliberately refrained from mentioning the large
cardinals that logicians often discuss, precisely because I share
your distaste for such stuff.

I realized after I posted "week236" that some people might think I was
talking about mystical entities, when I was actually talking about very
concrete things.

Best,
jb

John Baez

Aug 2, 2006, 11:21:02 PM
Someday I'd like to go on and describe systems of notation for
ordinals above the Feferman-Schuette ordinal. But I might not
get around to it, so here's some stuff for anyone interested -
including possibly my future self.

This paper introduced the "Schuette Klammersymbole", which
generalize the Veblen hierarchy:

27) Kurt Schuette, Kennzeichnung von Ordnungszahlen durch rekursiv
erklaerte Funktionen, Math. Ann. 127 (1954), 15-32.

These papers discuss a general concept of "ordinal notation system",
which includes the Schuette Klammersymbole and also something
called the "n-ary Veblen hierarchy":

28) Anton Setzer, An introduction to well-ordering proofs in Martin-
Loef's type theory, in Twenty-Five Years of Constructive Type Theory,
eds. G. Sambin and J. Smith, Clarendon Press, Oxford, 1998, pp. 245-263.
Also available at http://www.cs.swan.ac.uk/~csetzer/index.html

Anton Setzer, Ordinal systems, in Sets and Proofs, Cambridge U. Press,
Cambridge, 1999, pp. 301-331. Also available at
http://www.cs.swan.ac.uk/~csetzer/index.html

This paper has a nice expository section on generalizations of the
Veblen hierarchy:

29) Jean H. Gallier, What's so special about Kruskal's theorem and
the ordinal Gamma_0? A survey of some results in proof theory,
sec. 7: A glimpse at Veblen hierarchies, Ann. Pure Appl. Logic 53
(1991), 199-260. Also available at
http://www.cis.upenn.edu/~jean/gallier-old-pubs.html

This paper is very useful, since it compares different notations:

30) Larry W. Miller, Normal functions and constructive ordinal notations,
J. Symb. Log. 41 (1976), 439-459.

You can get it through JSTOR if you have access to that.

This webpage gives a nice definition of "ordinal notation system"
as a coalgebra of a certain functor - nice if you understand categories,
that is:

31) Peter Hancock, Ordinal notation systems,
http://homepages.inf.ed.ac.uk/v1phanc1/ordinal-notations.html

Finally, the Wikipedia article on "large countable ordinals" has
some references to books which are, alas, out of print.


Dave L. Renfro

Aug 3, 2006, 10:43:18 AM
John Baez wrote:

[comments about pictures for w^w and epsilon_0]

Those interested in this subthread may find the following
web pages of interest. In particular, see "Skolem's
problem" below.

"How to Count Up to the First Epsilon" by Libor Behounek
http://web.ff.cuni.cz/~behounek/eps55.htm
http://www.volny.cz/behounek/logic/papers/ordcalc/eps55.html

http://tinyurl.com/fqz7b

Internet archive version of first URL above, in view
of the comment "This site is no longer maintained".

"Ordinal Calculator" by Libor Behounek
http://www.volny.cz/behounek/logic/papers/ordcalc/index.html

Let W be the set of all (formal) ordinal numbers that
can be written down with symbols 0, 1, 2, ..., omega(),
epsilon(), +, -, *, ^. There exists an algorithmical
formal manipulation which calculates addition, subtraction,
multiplication, ordinal power and several other ordinal
functions (e.g., cofinality and cardinality) of elements
of W in the Cantor normal form. The algorithms can effectively
be implemented as a computer program.
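A toy version of such a calculator (mine, not Behounek's) is easy for
ordinals below omega^omega: represent an ordinal in Cantor normal form
as a descending list of (exponent, coefficient) pairs; addition then
absorbs every term of the left summand whose exponent is smaller than
the right summand's leading exponent.

```python
def oadd(a, b):
    """Add ordinals in Cantor normal form below omega^omega.
    Each ordinal is a list of (exponent, coefficient) pairs with
    exponents strictly decreasing and coefficients positive."""
    if not b:
        return a
    e, c = b[0]
    kept = [(e2, c2) for e2, c2 in a if e2 > e]    # terms that survive
    carry = sum(c2 for e2, c2 in a if e2 == e)     # at most one term
    return kept + [(e, carry + c)] + b[1:]

one   = [(0, 1)]
omega = [(1, 1)]

assert oadd(one, omega) == [(1, 1)]               # 1 + omega = omega
assert oadd(omega, one) == [(1, 1), (0, 1)]       # omega + 1 > omega
assert oadd(oadd(omega, one), omega) == [(1, 2)]  # (omega+1) + omega = omega*2
```

The asymmetry of the last two lines is exactly the noncommutativity
of ordinal addition.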

"On an idea of Peter Dybjer's" by Peter Hancock
http://www.dcs.ed.ac.uk/home/pgh/dybjer.html

Hancock lists some "watershed" ordinals and discusses
"Various Veblen hierarchies".

"Skolem's problem" by Peter Hancock
http://www.dcs.ed.ac.uk/home/pgh/skolem.html

http://scholar.google.com/scholar?q=Skolem's-problem+ordinal
http://www.google.com/search?q=Skolem-problem+ordinal

Hilbert Levitz, "Transfinite ordinals and their notations: For
the uninitiated", Version 1.1, 8 pages. A 220 K .ps file is at
http://www.cs.fsu.edu/~levitz/research.html

Dave L. Renfro, 20 July 2006 sci.math post.
http://groups.google.com/group/sci.math/msg/e92979b4f24d9789

As you get farther out along the ordinals, the intervals
where the ordinals are ahead of the cardinals become
blips that can be ignored.

Justin T. Miller, "On the Independence of Goodstein's Theorem",
Undergraduate Thesis, University of Arizona, 30 April 2001.
http://www.u.arizona.edu/~miller/thesis/node1.html

Dave L. Renfro

tc...@lsa.umich.edu

Aug 3, 2006, 12:43:37 PM
In article <earpki$1il$1...@glue.ucr.edu>, John Baez <ba...@math.ucr.edu> wrote:
> > If they only refer to themselves, if the only thing these infinite
> > sets refer to is other infinite sets then it's pointless self-reflexive
> > recursive circular reasoning.
>
> I hope you see that this is not true for the countable ordinals I
> was discussing. I deliberately refrained from mentioning the large
> cardinals that logicians often discuss, precisely because I share
> your distaste for such stuff.

One of Harvey Friedman's ongoing projects is to demonstrate that large
cardinals are needed to prove "ordinary" statements in mathematics.
For example, he has found a statement of the following general form---

for every directed graph G of a certain kind, there exists an
independent subset S of G such that given any induced subgraph H
of G in the complement of S, there exists another induced subgraph H'
in the complement of S that shares certain vertices of H and avoids
a certain special vertex of G

---with the property that it is not provable in ZFC but *is* provable
if you assume the existence of a certain type of large cardinal. (More
precisely, the above statement is equivalent to the 1-consistency of a
certain large cardinal.) It is quite remarkable that such an innocuous
combinatorial statement, which makes no direct reference to large cardinals
or concepts in logic (such as consistency or provability), requires large
cardinals to prove.

If Friedman's program works out the way he hopes it will---and so far there
appears to be no fundamental obstacle in the way---then large cardinal
axioms could come to play roughly the same role in mathematics that
statements such as the Riemann hypothesis and P != NP currently play;
that is, they will be tacked on as extra hypotheses whenever they are
needed to prove some interesting theorem. (The main difference is that
we still hope to be able to "settle" the Riemann hypothesis and P != NP
with a proof that is formalizable in very weak logical systems, whereas
we will not be able to "settle" the large cardinal hypotheses in quite
the same way.) In that case, one may be compelled to deal with large
cardinals, distaste or no distaste.

Jack Campin - bogus address

Aug 4, 2006, 7:21:28 PM
> large cardinal axioms could come to play roughly the same role in
> mathematics that statements such as the Riemann hypothesis and
> P != NP currently play; that is, they will be tacked on as extra
> hypotheses whenever they are needed to prove some interesting
> theorem. [...] In that case, one may be compelled to deal with

> large cardinals, distaste or no distaste.

What kinds of distaste are imaginable?

Okay, an axiom strong enough to induce inconsistency is clearly
out, but short of that, what sort of heuristic/dialectic reasoning
would you use to argue that a certain large cardinal axiom was
unreasonable?

(Does Maddy take that on? Can't remember if I've read her papers).

Could P != NP itself be something that a large cardinal axiom might
settle?

============== j-c ====== @ ====== purr . demon . co . uk ==============
Jack Campin: 11 Third St, Newtongrange EH22 4PU, Scotland | tel 0131 660 4760
<http://www.purr.demon.co.uk/jack/> for CD-ROMs and free | fax 0870 0554 975
stuff: Scottish music, food intolerance, & Mac logic fonts | mob 07800 739 557

Keith Ramsay

Aug 5, 2006, 4:59:21 AM
John Baez wrote:
|I believe that for every countably infinite ordinal X below
|the Church-Turing ordinal, there's an enumeration of X such
|that the order relation on X becomes a recursive relation on X.
|Is this right?

I assume the Church-Turing ordinal is the same as the one
often known as the Church-Kleene ordinal.

Yes. It can be made primitive recursive even. There's a result
of Spector extending that to hyperarithmetically definable
well-orderings. Here's a generalization:

http://www.math.uchicago.edu/~antonio/papers/equimorphismspdf.pdf

If you just want to convince yourself that recursively enumerable
can be reduced to primitive recursive, consider the fact that if A
is a recursively enumerable subset of the natural numbers, then
it's in one-to-one correspondence with a primitive recursive set B
of pairs <i, j> of natural numbers, where <i, j> is in B if i is
generated on step j of a computation enumerating A. It takes
some more tinkering to ensure that the ordering is also primitive
recursive, of course, but this ability to postpone indefinitely dealing
with an element of a recursively enumerable set is the main
trick you need.
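The pairing trick can be made concrete (a sketch with invented names):
deciding whether the pair <i, j> lies in B only requires running the
enumeration of A for j+1 steps, so membership in B is decidable even
though membership in A need not be.

```python
from itertools import islice

def enumerate_A():
    # Some recursively enumerable set A: here, an enumeration of the
    # squares (any computable enumeration would do).
    n = 0
    while True:
        yield n * n
        n += 1

def in_B(i, j):
    """Decide whether <i, j> is in B, i.e. whether i is the value
    generated at step j of the enumeration of A. Only finitely much
    computation is needed, so this is a decidable test."""
    outputs = list(islice(enumerate_A(), j + 1))
    return outputs[j] == i

assert in_B(9, 3)        # 9 is generated at step 3 (0, 1, 4, 9, ...)
assert not in_B(9, 2)
```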

If we have a primitive recursive well-ordering on a primitive recursive
subset of the natural numbers, we can extend it to all the natural
numbers by just taking the other natural numbers, outside of the
set, to be less than the ones inside of the set, and ordered by the
usual ordering of the natural numbers. This at worst defines a
larger ordinal than the original definition does.

Keith Ramsay

tc...@lsa.umich.edu

Aug 5, 2006, 3:17:35 PM
In article <bogus-5C1C14....@news.news.demon.net>,

Jack Campin - bogus address <bo...@purr.demon.co.uk> wrote:
>Okay, an axiom strong enough to induce inconsistency is clearly
>out, but short of that, what sort of heuristic/dialectic reasoning
>would you use to argue that a certain large cardinal axiom was
>unreasonable?

I'll leave this question for those who personally dislike large cardinal
axioms to answer.

>(Does Maddy take that on? Can't remember if I've read her papers).

She certainly discusses why set theorists don't like V = L, but I'm not
aware of any extended discussion by her on large cardinals per se. But
I don't follow the literature closely.

>Could P != NP itself be something that a large cardinal axiom might settle?

We can't rule out that possibility at present. But there's also no evidence
for it. About the only interesting undecidability result for P != NP is
that it can't be proved in a certain very weak system of bounded arithmetic,
unless a certain widely believed cryptographic conjecture is false. This is
a million miles from undecidability in ZFC.

Aatu Koskensilta

Aug 5, 2006, 10:09:28 AM
Jack Campin - bogus address wrote:
> Okay, an axiom strong enough to induce inconsistency is clearly
> out, but short of that, what sort of heuristic/dialectic reasoning
> would you use to argue that a certain large cardinal axiom was
> unreasonable?

Well, no one knows whether Reinhardt's ultimate large cardinal axiom

There is a non-trivial elementary embedding j: V --> V of the
universe in itself.

is inconsistent with choice (it is inconsistent with ZFC as famously
proved by Kunen).

Also, if a large cardinal axiom was introduced that broke the nice
linear ordering of large cardinals w.r.t. consistency strength something
would clearly be amiss. It's possible that at some future time we have
two incompatible axioms which nevertheless can be forced over each
other. It would then be unclear in what sense one is correct and the
other incorrect. Happily, such is not the case today and it seems that
there is a coherent structure to the hierarchy of large cardinals.

> (Does Maddy take that on? Can't remember if I've read her papers).

Maddy's papers Believing the axioms I & II and also her paper on V=L
contain a wealth of information and insightful discussion about these
issues.

> Could P != NP itself be something that a large cardinal axiom might
> settle?

Large cardinals have all sorts of arithmetical consequences, and in some
entirely theoretical sense it is possible that to settle the P = NP
problem a large cardinal is needed.

--
Aatu Koskensilta (aatu.kos...@xortec.fi)

"Whereof one cannot speak, thereof one must be silent"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus

Aatu Koskensilta

Aug 6, 2006, 2:10:33 PM
Aatu Koskensilta wrote:
> Well, no one knows whether Reinhardt's ultimate large cardinal axiom
>
> There is a non-trivial elementary embedding j: V --> V of the
> universe in itself.
>
> is inconsistent with choice (it is inconsistent with ZFC as famously
> proved by Kunen).

That should be "without choice", of course.

John Baez

Aug 6, 2006, 9:09:31 PM
In article <1154768360.8...@m79g2000cwm.googlegroups.com>,
Keith Ramsay <kra...@aol.com> wrote:

>John Baez wrote:

>|I believe that for every countably infinite ordinal X below
>|the Church-Turing ordinal, there's an enumeration of X such
>|that the order relation on X becomes a recursive relation on X.
>|Is this right?

>I assume the Church-Turing ordinal is the same as the one
>often known as the Church-Kleene ordinal.

Sorry! I seem to have a Pavlovian tendency to write "Turing"
when I see "Church-". I've made that mistake several times.
I see now it led me into learning about Turing's thesis work
with Church on "Systems of logic defined by ordinals".
When I heard about this, I assumed it dealt with the
"Church-Turing ordinal". I should have been saying
"Church-Kleene ordinal" all along. I'll fix "week236".

>Yes. It can be made primitive recursive even.

Thanks!

Keith Ramsay

Aug 7, 2006, 1:07:17 AM
Jack Campin wrote:
|Okay, an axiom strong enough to induce inconsistency is clearly
|out, but short of that, what sort of heuristic/dialectic reasoning
|would you use to argue that a certain large cardinal axiom was
|unreasonable?
|
|(Does Maddy take that on? Can't remember if I've read her papers).

She models her exposition to some extent on the point of
view of a certain kind of set theorist, who has little objection
to large cardinal axioms in general. One of the criteria she
describes for accepting an axiom is called "one step back from
disaster", and you can probably guess what it means in this
case: take the large cardinal axiom that falls just short of
producing a contradiction. Another criterion is called "maximize",
and it says that everything else being equal, we want for our
set theoretical universe to be as large as can be.

To take a simple case, imagine a finitist and a non-finitist are
discussing the merits of the axiom of infinity. The non-finitist might
make an argument based on "maximize", saying that the finitist's
set theoretical universe appears to be simply the hereditarily
finite part of the universe. So rather than talk only about that
part, let's instead assume the universe has an infinite set in it
so long as there's no obvious problem with it. Likewise, large
cardinal axioms often have the property that a universe in which
they fail is much like a universe in which they are true, but
chopped off at the rank of the first example of the large cardinal.

So all in all, the point of view described in those papers favors
relatively free adoption of large cardinal axioms, so long as
there's no tangible reason not to.

Keith Ramsay

Bill Taylor

Aug 7, 2006, 2:23:45 AM
Aatu Koskensilta wrote:

> Large cardinals have all sorts of arithmetical consequences, and in some
> entirely theoretical sense it is possible that to settle the P = NP
> problem a large cardinal is needed.

Can someone just remind me please... I think it's been stated here
before that P = NP can be reworded in the language of PA so that
it has only one unbounded quantifier... I forget which type.

So that if it turns out to be undecidable in PA, it is automatically
true (or false, depending on which quantifier).
OC, this may have no bearing on Aatu's comment above.

--------------------------------------------------------------------
Bill Taylor W.Ta...@math.canterbury.ac.nz
--------------------------------------------------------------------
Q: What is Iraq's national bird?
A: Duck!
-------------------------------------------------------------------

Keith Ramsay

Aug 7, 2006, 2:25:29 AM
tc...@lsa.umich.edu wrote:
|It's still possible, of course, for someone---you mentioned Nik
|Weaver---to come along and argue that our intuitive notion of
|predicativism, fuzzy though it is, can't possibly be identified with
|the level Gamma_0. The reason you can't seem to decide immediately
|whether Weaver's position is nonsensical or not is probably because
|the critical questions are not mathematical but philosophical, and of
|course it's usually harder to arrive at definitive answers in
|philosophy than in mathematics.

My impression is that a lot of it has to do with Weaver's being
interested in predicativism as a possible working philosophy, which is
not quite the same as being interested in the concept of predicativity
in the abstract.

There's a basic problem with relating a philosophy to formal systems,
due to Goedel incompleteness. If we can exhibit a formal system S that
allegedly captures all the reasoning acceptable to a given philosophy
of mathematics, then it's a little hard to see how such a claim can be
supported. If someone holding that philosophy can see that this is so,
then how can they fail to see that S is consistent, which is unprovable
in S? And if the formal system S captures all the reasoning they accept
without their knowing it, how can this be established by anyone else?

Consider this for the prototypical "believer in ZFC". Proofs from ZFC
tend to be accepted as proofs without the author feeling the need to
include any of the axioms among the hypotheses, whereas proofs from
large cardinal axioms and the like tend to be stated with the
additional assumptions listed as explicit hypotheses. It's not very
credible to say, however, that provability in ZFC represents the exact
line between what can be accepted as proven and what can't, since,
in particular, if one manages to prove that a theorem follows from the
consistency of ZFC, this is as good a reason to believe it as its
provability from ZFC.

Weaver and Feferman regard this situation in different lights in the
case of predicativism. (And of course they differ on other issues.)

From my point of view, we have to concede the objection, but only
so far. I don't think we can reasonably ask a predicativist to agree
that only the things that can be proven predicatively, in the sense of
the famous analysis of predicativity, can be validly proven. That would
be akin to asking the "ZFC believer" to agree not to believe that ZFC
is consistent. On the other hand, however, I'm unpersuaded by any
of the arguments that I've read that the predicativist is entitled to
go "very far beyond" the formal sense of predicativity, where I'm
just going to leave "very far beyond" as a vague idea here. I think
we can concede that some amount of reflection upon the concept
of predicative proof, taking the predicativist outside of the strict
formal limits on it, is compatible with the overall philosophy, without
having to concede that anything very "strongly impredicative" has
been justified in a way compatible with their philosophy.

I don't think it's so hard to see that the way one ordinarily proves
induction up to Gamma_0 is impredicative. It's not that it's impossible
to define it predicatively. Each computable ordinal can be defined
as an ordering on natural numbers, given by a primitive recursive
relation on them. The existence of this ordering isn't the problem.
The problem is proving induction up to it. The way that one ordinarily
does it makes reference to sets of ordinals. That's the gist of it. To
show that this is not a merely apparent obstacle to a predicative
proof is a longer story.
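As a concrete illustration (mine, not Ramsay's) of "an ordering on natural numbers, given by a primitive recursive relation": here is a recursive well-ordering of N of order type omega^2, a tiny ordinal compared to Gamma_0, but it shows that defining the ordering is the easy part:

```python
def unpair(n):
    """Bijection N -> N x N: write n + 1 = 2^a * (2b + 1)
    and return the pair (a, b)."""
    n += 1
    a = 0
    while n % 2 == 0:
        n //= 2
        a += 1
    return a, (n - 1) // 2

def less(m, n):
    """A recursive well-ordering of N of order type omega^2:
    compare the decoded pairs lexicographically, so the ordering
    looks like omega copies of omega laid end to end."""
    return unpair(m) < unpair(n)
```

Writing this comparator down is trivial; what is hard, predicatively, is *proving* induction along the ordering it defines.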

Keith Ramsay

Aatu Koskensilta

Aug 7, 2006, 3:30:30 AM
Bill Taylor wrote:
> Aatu Koskensilta wrote:
>
>> Large cardinals have all sorts of arithmetical consequences, and in some
>> entirely theoretical sense it is possible that to settle the P = NP
>> problem a large cardinal is needed.
>
> Can someone just remind me please... I think it's been stated here
> before that P = NP can be reworded in the language of PA so that
> it has only one unbounded quantifier... I forget which type.

P = NP is Pi_2, so no - barring some development in complexity theory.

Daryl McCullough

Aug 8, 2006, 8:52:02 AM
Bill Taylor says...

>Can someone just remind me please... I think it's been stated here
>before that P = NP can be reworded in the language of PA so that
>it has only one unbounded quantifier... I forget which type.
>
>So that if it turns out to be undecidable in PA, it is automatically
>true (or false, depending on which quantifier).

I would think that it would be a universal quantifier. To prove
P = NP false, all you have to do is to find one problem and show
that

(1) It can be solved using a nondeterministic polynomial-time algorithm.
(2) It cannot be solved using a deterministic polynomial-time algorithm.

--
Daryl McCullough
Ithaca, NY

John Baez

Aug 8, 2006, 4:09:51 AM
In article <1154931929....@n13g2000cwa.googlegroups.com>,
Keith Ramsay <kra...@aol.com> wrote:

>I don't think it's so hard to see that the way one ordinarily proves
>induction up to Gamma_0 is impredicative. It's not that it's impossible
>to define it predicatively. Each computable ordinal can be defined
>as an ordering on natural numbers, given by a primitive recursive
>relation on them. The existence of this ordering isn't the problem.
>The problem is proving induction up to it. The way that one ordinarily
>does it makes reference to sets of ordinals. That's the gist of it. To
>show that this is not a merely apparent obstacle to a predicative
>proof is a longer story.

I'd love to hear a bit more of the story, especially if you can tell
it in a charming and not too rigorous manner. In particular, nothing
in the paragraph says what's special about Gamma_0. For example,
suppose I have an ordinal smaller than Gamma_0. How can I give a
"predicative" proof of induction up to that ordinal? What breaks
down at Gamma_0?

Patricia Shanahan

Aug 8, 2006, 11:07:58 AM

Proving that a problem cannot be solved by a deterministic
polynomial-time algorithm is very difficult. Performance lower bounds
are often difficult to prove because there might be some completely
unanticipated approach.

There is a large set of known NP-complete problems: problems in NP
such that if any one of them is in P, then P=NP. To prove P=NP all we
need is a single provably deterministic polynomial time algorithm for a
single NP-complete problem.

I suspect that if the P=NP question ever gets resolved, it will be by
exhibition of a deterministic polynomial time algorithm for an
NP-complete problem, and it will be resolved in the P=NP direction.

However, it is quite likely that P is not equal to NP, but it will never
be proved. There will just be a continuing accumulation of NP-complete
problems for which there is neither a known deterministic polynomial
time algorithm nor a proof that none exists.

Patricia

Daryl McCullough

Aug 8, 2006, 11:32:09 AM
Patricia Shanahan says...
>
>Daryl McCullough wrote:

>> I would think that it would be a universal quantifier. To prove
>> P = NP false, all you have to do is to find one problem and show
>> that
>>
>> (1) It can be solved using a nondeterministic polynomial-time algorithm.
>> (2) It cannot be solved using a deterministic polynomial-time algorithm.
>
>Proving that a problem cannot be solved by a deterministic

>polynomial-time algorithms is very difficult...

>There is a large set of known NP-complete problems, problems in NP
>such that if any one of them is in then P=NP. To prove P=NP all we need
>is a single provably deterministic polynomial time algorithm for a
>single NP-complete problem.

Okay, so I had it exactly backwards. P=NP can be cast as an existential
statement (so, as Bill says, if it is not provable, then it is not true).

tc...@lsa.umich.edu

Aug 8, 2006, 2:38:31 PM
In article <ebaap...@drn.newsguy.com>,

Daryl McCullough <stevend...@yahoo.com> wrote:
>Okay, so I had it exactly backwards. P=NP can be cast as an existential
>statement (so, as Bill says, if it is not provable, then it is not true).

It's not quite so simple. If P = NP, then there is some Turing machine M
and some polynomial p such that M correctly solves any instance of the
satisfiability problem of size n in time p(n). However, even if you can
exhibit M and p, it does not follow that a *proof* that M and p behave
as advertised can be carried out on the basis of PA. Ostensibly, checking
this fact requires checking infinitely many instances---unless you can
figure out a way to reduce this verification to a *finite* computation
(note that doing so would be a major advance in complexity theory).

tc...@lsa.umich.edu

Aug 8, 2006, 2:59:00 PM
In article <eb9gsf$l08$1...@glue.ucr.edu>, John Baez <ba...@math.ucr.edu> wrote:
>I'd love to hear a bit more of the story, especially if you can tell
>it in a charming and not too rigorous manner. In particular, nothing
>in the paragraph says what's special about Gamma_0. For example,
>suppose I have an ordinal smaller than Gamma_0. How can I give a
>"predicative" proof of induction up to that ordinal? What breaks
>down at Gamma_0?

To get induction up to some ordinal, one first needs to introduce ordinal
notations to speak of the ordinals. The first technical hitch is that
proving that your ordinal notation system "makes sense" (i.e., has all
the self-consistency and uniqueness properties that you want) can in
principle involve an arbitrary amount of arithmetical knowledge. If
you're starting with only PA and the notion of the set of integers, then
this puts an upper bound on how much arithmetical knowledge of this type
you're allowed to assume, and hence an upper bound on the ordinals you
have a right to work with.

The second hitch is that if you're a predicativist, then you're not going
to allow quantification over arbitrary sets but only over the set of
integers and sets that you've already defined. This puts another
restriction on how far your induction can proceed.

Putting these restrictions together gets you up to Gamma_0.

Keith Ramsay

Aug 11, 2006, 1:22:34 AM

Aatu Koskensilta wrote:

| P = NP is Pi_2, so no - barring some development in complexity theory.

I think it's P<>NP that's pi_2, while P=NP is sigma_2. That is,
P=NP is equivalent to the existence of something (an algorithm
with a polynomial bound) which for each instance of a problem
class has some computable (primitive recursive) property, that
of solving the problem correctly within the bound. By standard
metatheorems, a proof that P<>NP would necessarily be
constructive, while a proof that P=NP might be a pure existence
proof.
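Spelled out (my paraphrase, not from the post), the quantifier shape being described is:

```latex
% Sigma_2 form of P = NP.  Here R is a primitive recursive predicate
% (an assumption of this paraphrase) saying: machine e, run on input x
% for C(1 + |x|^C) steps, halts with an answer agreeing with SAT(x).
\mathrm{P} = \mathrm{NP} \;\iff\; \exists e \,\exists C \;\forall x \; R(e, C, x)
```

Negating this gives P<>NP the dual Pi_2 form (for all e and C there is an x on which the machine fails), which is why a proof of P<>NP would have to be constructive in the sense of the standard metatheorems.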

Incidentally, there's a trick that allows us almost to state the
algorithm in advance. If we are given a problem in NP to solve
and have a time limit T, enumerate the first sqrt(T) algorithms
in some ordering and run each of them for almost sqrt(T) steps,
with the instance of the problem as input, checking whether
any of them stop, providing us with a "guess" that allows us to
determine that the answer to the problem is "yes". If the
problem is an instance of SAT, for instance, we would look for
a satisfying substitution into the variables. If we don't find one
within the time allowed, we answer "no". Given enough time
we find a verifying guess, if there is one; the key is just how
long it might take to find it.

If P=NP, then there exists an algorithm which finds the correct
"guess" for each positive instance in polynomial time. Suppose
it's the k-th algorithm in the enumeration. Then if the algorithm
requires P(n) steps to find the witness, and we are allowed
time T >> max {k^2, P(n)^2}, then we always find it (if it exists).
So if P=NP then our algorithm does the job, provided it's given
a large enough time limit.
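The trick above can be sketched in code. This is a toy of my own construction, not from the thread: candidate truth assignments play the role of the "guesses", the verifier is the fast checkable step, and a small hard-coded family of generators stands in for a genuine enumeration of all algorithms:

```python
from itertools import islice
from math import isqrt

def satisfies(cnf, assign):
    """Fast verification step: does `assign` (dict var -> bool) satisfy
    every clause of the CNF?  Clauses are lists of signed integers."""
    return all(any(assign.get(abs(lit), False) == (lit > 0) for lit in clause)
               for clause in cnf)

def universal_search(cnf, nvars, algorithms, T):
    """Given time limit T, run the first isqrt(T) guess-generators for
    isqrt(T) steps each, checking every guess with the verifier.
    Returns a satisfying assignment, or None ("answer no")."""
    budget = isqrt(T)
    for gen in islice(algorithms, budget):
        for assign in islice(gen(cnf, nvars), budget):
            if satisfies(cnf, assign):
                return assign
    return None

# A toy "enumeration of algorithms": exhaustive search split into
# blocks by residue mod 4.  A real enumeration would range over all
# programs in some fixed ordering.
def block(start):
    def gen(cnf, nvars):
        for code in range(start, 2 ** nvars, 4):
            yield {v + 1: bool((code >> v) & 1) for v in range(nvars)}
    return gen

algorithms = [block(s) for s in range(4)]
```

The key property survives even in the toy: if some generator in the list finds witnesses quickly, the whole search does too, at only a quadratic cost in the time limit, which is exactly why the constant k (the algorithm's index) disappears into the polynomial.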

Every polynomial is bounded above on positive integers n by
C(1+n^C) for some constant C. For each n, let f(n) be the smallest
value of C such that for each satisfiable instance of SAT of length n,
the above procedure given a time limit of C(1+n^C) finds a satisfying
substitution. Then f(n) is in principle computable, bounded above
if P=NP and unbounded above if P<>NP. It's conceivable that one
could prove that an upper bound exists without knowing how to
exhibit it.

This is all said somewhat more neatly in the theory of "Levin
complexity".

Keith Ramsay

Aatu Koskensilta

Aug 12, 2006, 10:03:28 AM
Keith Ramsay wrote:
> Aatu Koskensilta wrote:
>
> | P = NP is Pi_2, so no - barring some development in complexity theory.
>
> I think it's P<>NP that's pi_2, while P=NP is sigma_2. That
> P=NP is equivalent to the existence of something (an algorithm
> with a polynomial bound) which for each instance of a problem
> class has some computable (primitive recursive) property, that
> of solving the problem correctly within the bound.

You're right, of course; I probably should have thought for a while
instead of relying on my sometimes unreliable memory.

Nik Weaver

Aug 24, 2006, 9:00:10 AM
It might help if I explain where I think the "autonomous progressions"
analysis of predicativity goes wrong.

It's basically a bootstrap idea. You start with some formal system
that is clearly predicatively acceptable, see which ordinals you can
access within that system, use these new ordinals to formulate stronger
systems, and repeat.

That is, we have a hierarchy of formal systems S_a, and once we've
proven that a is an ordinal (more precisely, an ordinal notation) we
get to work in S_a. There are variations but this is the basic idea.

The issue I have raised is: what justifies passing from "a is an
ordinal notation" to "S_a is valid"? Well, the argument looks very
simple. When you look at the definitions it's clear that if S_c is
valid for all c < b then S_b is valid. So let X = {b < a: S_b is not
valid}; from what I just said this set has no smallest element, hence
by the well-ordering property (i.e., a is an ordinal notation) it must
be empty. Thus S_b is valid for all b < a, and we finally conclude
that S_a is valid.

What's wrong with this argument is that forming the set X requires a
comprehension axiom --- an axiom that lets you form "the set of x such
that P(x)" for some family of properties P --- that goes way beyond
anything that anyone thinks is predicative.

Now if you look carefully at the proof that autonomous progressions get
up to Gamma_0 you find that you don't need the full strength of "S_a is
valid". What you need is just "recursive definitions of length a are
legitimate". So what's really at stake is: can a predicativist accept
that if a is well-ordered then constructions of length a are
well-defined? We're tempted to say "of course" only because we're used
to the classically trivial equivalence between induction and recursion
which is proven using an impredicative comprehension axiom.

Here's where it really gets bad for the
predicativity-is-limited-by-Gamma_0 idea. Suppose we just agree, on
whatever grounds, that predicativists can accept the statement
"induction up to a implies recursion up to a". That's *too much*!
This would allow the predicativist to see all at once that every a <
Gamma_0 is an ordinal notation, and hence that Gamma_0 is an ordinal
notation, which he's not supposed to be able to do.

The *only way* to get the Gamma_0 idea to work is to somehow show that
(1) for each a, predicativists can accept "induction up to a implies
recursion up to a" but (2) they cannot accept "for all a, induction up
to a implies recursion up to a". I am not aware of any even remotely
plausible defenses for these two claims.

I make this argument in detail in my paper "Predicativity beyond
Gamma_0", which is available at

http://www.math.wustl.edu/~nweaver/conceptualism.html

I go to some length to show how essentially the same error appears in a
variety of different analyses of predicativism. However, I think it's
fundamentally a very simple error that is obvious once you see it, and
I am somewhat baffled, and saddened, by the obstinate hostility I've
encountered in response to my paper, on the Foundations of Mathematics
discussion list and elsewhere.

(The paper is called "Predicativity beyond Gamma_0" because in the
second part of the paper I try to show that using hierarchies of truth
predicates it's possible to get well beyond Gamma_0 in a predicatively
acceptable way.)

Nik Weaver
Math Dept.
Washington University
St. Louis, MO 63130
nwe...@math.wustl.edu
