
This Week's Finds in Mathematical Physics (Week 175)


John Baez

Dec 29, 2001, 9:00:25 PM

Also available at http://math.ucr.edu/home/baez/week175.html

December 29, 2001
This Week's Finds in Mathematical Physics (Week 175)
John Baez

I spent this Christmas in Greenwich, England. Over repeated visits
to England I have discovered many fascinating things of which many
Americans are unaware. For example: while in traffic one must drive on
the left side of the road, on escalators one must stand on the right.
You flip switches down to turn on lights. Camels and zebras have
escaped from the Royal Zoo and mated, and their hybrids roam the English
countryside. On the roadside you will occasionally see signs for
"humped zebra crossings". Also, the Royal Observatory in Greenwich
fires a powerful green laser each night to mark the Prime Meridian -
zero degrees longitude.

Four of the last five sentences are true. In particular, you really
*can* see a green laser beam shining due north from the Royal
Observatory, across the Thames, past the Citigroup Building and out
into the night. And speaking of longitude, the day before Christmas
I visited this observatory and had a wonderful time learning how John
Harrison solved the longitude problem.

The longitude problem? Ah, how soon we forget! It's pretty easy to
tell your latitude by looking at the sun or the stars. However, it's
pretty hard to tell your longitude, unless you have a clock that keeps
good time. After all, if you know what time it is in a fixed place,
like Greenwich, you can figure out how far east or west you've gone by
comparing the time you see the sun rise to the time it would rise there.
Unfortunately, until the late 1700's, pendulum clocks didn't work well
at sea, due to the rocking waves. This was a real problem! Ships would
lose track of their longitude, go astray, and sometimes even run aground,
killing hundreds of sailors.
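The arithmetic behind this is simple: the earth turns 360 degrees in 24 hours, so each hour of difference between local solar time and Greenwich time is 15 degrees of longitude. A sketch in Python (the function name is mine):

```python
def longitude_from_clock(local_solar_time_h, greenwich_time_h):
    """Longitude from the difference between local solar time and
    Greenwich time.  The earth turns 360 degrees in 24 hours, so one
    hour of clock difference is 15 degrees of longitude; east of
    Greenwich the sun rises earlier, so local time runs ahead."""
    return 15.0 * (local_solar_time_h - greenwich_time_h)

# Local noon while the Greenwich clock reads 2 pm: 30 degrees west.
print(longitude_from_clock(12.0, 14.0))   # -30.0
```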

Since England was a big maritime power, in 1714 they set up the Board of
Longitude, which offered a prize of 20,000 pounds to anyone who could
solve this problem. Newton and Halley favored a solution which involved
measuring the angle between the moon and nearby stars and then consulting
a bunch of tables. This was a complicated system that could only work
with the help of an accurate star atlas and a detailed understanding of
the motion of the moon. Newton set to work on the necessary calculations.
John Flamsteed was made the royal astronomer of England, and he set to work
on the star atlas. He moved into the Royal Observatory, and stayed up each
night making observations with the help of his wife.

However, before this "lunar distance method" came online, the watchmaker
John Harrison invented the first of a series of ingenious clocks that worked
well despite rocking waves and fluctuations of temperature. All these
can still be seen at the Royal Observatory - they're very beautiful!
In the process, Harrison developed a whole bunch of cool technology like
ball bearings and the bimetallic strip used in thermostats.

Alas, the Board refused to pay up even when Harrison built a clock that
was accurate to within .06 seconds a day, which was certainly good
enough. Finally King George III persuaded the Board to give Harrison
the prize - but by then Harrison was an old man. Luckily, I get the feeling
Harrison was really more interested in building clocks than winning the
prize money. He loved his work... one of the keys to a happy life.

Here's a book that tells his story in more detail:

1) Dava Sobel, Longitude, Fourth Estate Ltd., London, 1996.

I found it in the gift shop of the Observatory. It's a fun read, but
for the technical reader it's frustratingly vague on the technical
details of how Harrison's clocks actually work.

I also bought this book there:

2) E. G. Richards, Mapping Time: The Calendar and its History, Oxford
U. Press, Oxford, 1998.

Since it's almost New Year's Day, let me tell you a bit about what I
learned about calendars!

Mathematical physics has deep roots in astronomy, which may have been
the first exact science. Thanks to astrology, the ancient theocratic
states put a lot of resources into precisely tracking and predicting the
motion of the sun, moon and planets. For example, by 700 BC the
Babylonians had measured the length of the year to be 365.24579 days,
with an error of only .00344 days. Two hundred years later, they had
measured the length of the month to be 29.53014 days - an error of only
2.6 seconds.

If there were 360 days in a year, 30 days in a month, and 12 months in a
year, the ancients would have been happy, since they loved numbers with
lots of divisors. But alas, there aren't! These whole numbers come
tantalizingly close, but not close enough, so the need for accurate
calendars, balanced by the desire for simplicity, kept pushing the
development of mathematics and astronomy forward.

There are also lots of complications I haven't mentioned. I've been
talking about the "mean solar day", the "mean synodic month" and the
"tropical year", but in fact the length of the day and month vary
substantially due to the tilt of the earth's axis, the tilt of the
moon's orbit, and other effects - so actually there are several
different definitions of day, month and year. This was enough to keep
the astronomer-priests in business for centuries. For more on the
physics of it all, try:

3) John Baez, The wobbling of the earth and other curiosities,
http://math.ucr.edu/home/baez/wobble.html

Unfortunately, the Romans, whose calendar we inherit, were real
goofballs when it came to calendrics. Their system was run by a body of
"pontifices" headed by the Pontifex Maximus. In 450 BC these guys
adopted a calendar in which odd-numbered years had 12 months and 355
days, while even-numbered years had 13 months and alternated between 377
and 378 days. The extra month, called Mercedonius, was stuck smack in
the middle of February. Even worse, this system gave an average of 366
and 1/4 days per year - one too many - so it kept drifting out of kilter
with the seasons. The pontifices were authorized to fix things on an ad
hoc basis as needed, but power corrupts, so they started taking bribes
to suddenly advance or postpone the start of the year.

As a result, by the time Julius Caesar became dictator, the calendar was
three months in advance of the seasons! After consulting with the
Alexandrian astronomer Sosigenes, he decided to institute reforms. To
straighten things out, the year 46 BC was made 445 days long. This
was known as the Last Year of Confusion. It featured an extra long
Mercedonius as well as two extra months after December, called Undecimber
and Duodecimber.

The new so-called "Julian calendar" featured 12 months and 365 days,
with an extra day in February every fourth year. The months alternated
nicely between 31 and 30 days, except for February, which only had 30 on
leap years. Unfortunately, Caesar was assassinated in 44 BC before this
system fully took hold. The pontifices ineptly interpreted his orders
and stuck in an extra day every *third* year. This didn't get fixed
until 9 BC, when Augustus stopped the practice and decreed that the next 3
leap years be skipped to make up for the extra ones the pontifices had
inserted.
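The arithmetic of Augustus's correction checks out. Counting roughly 36 years from Caesar's reform to the fix in 9 BC (the exact endpoints are my assumption), the mistaken every-third-year rule inserts three more leap days than the every-fourth-year rule calls for:

```python
# Roughly 36 years from Caesar's reform (45 BC) to Augustus's fix
# (9 BC) - the exact endpoints are my assumption here.
years = 36
inserted = years // 3    # the pontifices' mistaken every-3rd-year rule
intended = years // 4    # Caesar's intended every-4th-year rule
print(inserted - intended)   # 3: the leap years Augustus had to skip
```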

From then on, things went more smoothly, except for a lot of name-grabbing.
When Julius Caesar was assassinated, the Senate took the month of Quintilis
and renamed it "Iulius" in his honor, giving us July. Augustus followed
suit, naming the month of Sextilis after himself - giving us August.
More annoyingly, he stole the last day from February and stuck it on his
own month to make it 31 days long, and did some extra reshuffling so the
months next to his had only 30 - giving us our current messy setup.

The Senate offered to name a month after the next emperor, Tiberius, but
he modestly declined. The next one, Caligula, was not so modest: he
renamed June after his father Germanicus. Then Claudius renamed May
after himself, and Nero grabbed April. Later, Domitian took October and
Antonius took September. The vile Commodus tried to rename all twelve
months, but that didn't stick. Then Tacitus snatched September away
from Antonius... but luckily, all these later developments have been
forgotten!

This is only a tiny fraction of the fascinating lore in Richards' book.
Ever wonder why there are 7 days in a week? That's pretty easy: they're
named after the 7 planets - in the old sense of "planets", meaning
heavenly bodies visible by eye that don't move with the stars. But
here's a harder puzzle! Why are the 7 planets listed in this order?

Sun (Sunday - Dies Solis)
Moon (Monday - Dies Lunae)
Mars (Tuesday - Dies Martis)
Mercury (Wednesday - Dies Mercurii)
Jupiter (Thursday - Dies Iovis)
Venus (Friday - Dies Veneris)
Saturn (Saturday - Dies Saturni)

There's actually a nice explanation. However, I won't give it away here.
Can you guess it?

Since ancient science was closely tied to numerology, I can't resist
mentioning some fun facts relating the calendar and the deck of cards.
As you probably know, playing cards come in 4 suits of 13 cards each,
for a total of 52. 52 is also the number of weeks in a year. The 4
suits correspond to the 4 seasons, so there are 13 weeks in each
season, just as there are 13 cards in each suit.

Even better, if we add up the face values of all the cards in the deck,
counting an ace as 1, a deuce as 2, and so on up to 13, we get

(1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10 + 11 + 12 + 13) x 4 = 364,

which is one less than the number of days in a year! The remaining day
corresponds to the "joker", a card which does not belong to any suit.

Many calendars contain "epagomenal days" not included in any month.
For example, the Egyptians had 5 epagomenal days, leaving 360 which they
could split up neatly into 12 months. In a system with one epagomenal
day - the "joker" - the remaining 364 days can be divided not only as

(30 + 30 + 31) x 4,

which allows for two 30-day calendar months and one 31-day calendar
month per season, but also as

13 x 28

which allows for 13 anomalistic months of 28 days each - where an
"anomalistic month" is the time it takes for the moon to come round to
its perigee, where it's as close to the earth as possible.

Putting it all together, we see that the number 364 factors as

13 x 4 x 7,

which corresponds to 13 months, each containing 4 weeks, each containing
7 days - or alternatively to 4 seasons, each containing 13 weeks, each
containing 7 days - or to 4 suits, each containing 13 cards, with an
average face value of 7.
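All this whole-number numerology is easy to verify. A quick check in Python:

```python
total = sum(range(1, 14)) * 4     # ace (1) through king (13), 4 suits
print(total)                      # 364

# The same number admits all the divisions above:
print((30 + 30 + 31) * 4)         # 364: 4 seasons of 3 calendar months
print(13 * 28)                    # 364: 13 anomalistic months of 28 days
print(13 * 4 * 7)                 # 364: 13 months x 4 weeks x 7 days
```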

Cute, eh? I'm not sure how much of this stuff is coincidence and how
much was planned out by the mysterious mystics who invented playing
cards. Of course we can't take these whole numbers too seriously -
for example, the anomalistic month is actually 27.55455 days long, not
28. However, a 364-day year *is* mentioned in the Book of Enoch, a
pseudepigraphical Hebrew text which was found, among other places, in the
Dead Sea Scrolls. In fact, a year of this length was used in Iceland as
late as 1940. The idea of having one epagomenal day and dividing each
season into months with 30, 30 and 31 days has also been favored by many
advocates of calendar reform.

Of course, numerology should always be left to competent mathematicians
who don't actually believe in it.

Here's another nice book:

4) Alain Connes, Andre Lichnerowicz and Marcel Paul Schutzenberger,
A Triangle of Thoughts, AMS, Providence, 2000.

This consists of polished-up transcripts of dialogues (or should I
say trialogues?) among these mathematicians. I wish more good scientists
would write this sort of thing; it's much less strenuous to learn stuff
by listening to people talk than by reading textbooks! It's true that
textbooks are necessary when you want to master the details, but for
the all-important "big picture", conversations can be much better.

This book focuses on mathematical logic and physics, with a strong touch
of philosophy... but it wanders all over the map in a pleasant way - from
Bernoulli numbers to game theory! The conversation is dominated by
Connes, whose name appears on the title in bigger letters than the other
two, perhaps because the others are now dead.

There is only one mistake in this book that I would like to complain
about. Following Roger Penrose, Connes takes quasicrystals as evidence
for some mysterious uncomputability in the laws of nature. The idea
is that since there's no algorithm for deciding when a patch of Penrose
tiles can be extended to a tiling of the whole plane, nature must do
something uncomputable to produce quasicrystals of this symmetry. The
flaw in this reasoning seems obvious: when nature gets stuck, it feels
free to insert a *defect* in the quasicrystal. Quasicrystals do not need
to be perfect to produce the characteristic diffraction patterns by which
we recognize them.

But that's a minor nitpick: the book is wonderful! Read it!

In case you don't know: Alain Connes is a Fields medalist, who won the
prize mainly for two things: his work on Von Neumann algebras, and his
work on noncommutative geometry. Now I'll talk a bit about von Neumann
algebras, since you'll need to understand a bit about them to follow the
rest of my description of the paper by Michael Mueger that I have
been slowly explaining throughout "week173" and "week174".

So: what's a von Neumann algebra? Before I get technical and you all
leave, I should just say that von Neumann designed these algebras to be
good "algebras of observables" in quantum theory. The simplest example
consists of all n x n complex matrices: these become an algebra if you
add and multiply them the usual way. So, the subject of von Neumann
algebras is really just a grand generalization of the theory of matrix
multiplication.

But enough beating around the bush! For starters, a von Neumann algebra
is a *-algebra of bounded operators on some Hilbert space of countable
dimension - that is, a bunch of bounded operators closed under addition,
multiplication, scalar multiplication, and taking adjoints: that's the *
business. However, to be a von Neumann algebra, our *-algebra needs one
extra property! This extra property is cleverly chosen so that we can
apply functions to observables and get new observables, which is
something we do all the time in physics.

More precisely, given any self-adjoint operator A in our von Neumann
algebra and any measurable function f: R -> R, we want there to be a
self-adjoint operator f(A) that again lies in our von Neumann algebra.
To make sure this works, we need our von Neumann algebra to be "closed"
in a certain sense. The nice thing is that we can state this closure
property either algebraically or topologically.

In the algebraic approach, we define the "commutant" of a bunch of
operators to be the set of operators that commute with all of them.
We then say a von Neumann algebra is a *-algebra of operators that's
the commutant of its commutant.

In the topological approach, we say a bunch of operators T_i converges
"weakly" to an operator T if their expectation values converge to that
of T in every state, that is,

<psi, T_i psi> -> <psi, T psi>

for all unit vectors psi in the Hilbert space. We then say a von
Neumann algebra is a *-algebra of operators that is closed in the
weak topology.

It's a nontrivial theorem that these two definitions agree!

While classifying all *-algebras of operators is an utterly hopeless
task, classifying von Neumann algebras is almost within reach - close
enough to be tantalizing, anyway. Every von Neumann algebra can be
built from so-called "simple" ones as a direct sum, or more generally a
"direct integral", which is a kind of continuous version of a direct
sum. As usual in algebra, the "simple" von Neumann algebras are defined
to be those without any nontrivial ideals. This turns out to be
equivalent to saying that only scalar multiples of the identity commute
with everything in the von Neumann algebra.

People call simple von Neumann algebras "factors" for short. Anyway,
the point is that we just need to classify the factors: the process
of sticking these together to get the other von Neumann algebras is
not tricky.
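This "trivial center" criterion can be checked by hand in the simplest factor, the algebra of all n x n matrices. A small numpy sketch for n = 2 (the helper name is mine); the two generators below generate the whole 2 x 2 matrix algebra, so commuting with both means commuting with everything:

```python
import numpy as np

# Two generators of the algebra of all 2 x 2 matrices.
a = np.array([[0.0, 1.0], [0.0, 0.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0]])

def commutes_with_generators(x):
    """True if x commutes with both generators (and hence with the
    whole 2 x 2 matrix algebra that they generate)."""
    return np.allclose(x @ a, a @ x) and np.allclose(x @ b, b @ x)

print(commutes_with_generators(3.0 * np.eye(2)))      # True: a scalar
print(commutes_with_generators(np.diag([1.0, 2.0])))  # False: not scalar
```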

The first step in classifying factors was done by von Neumann and
Murray, who divided them into types I, II, and III. This classification
involves the concept of a "trace", which is a generalization of the
usual trace of a matrix.

Here's the definition of a trace on a von Neumann algebra. First, we say
an element of a von Neumann algebra is "nonnegative" if it's of the form
xx* for some element x. The nonnegative elements form a "cone": they
are closed under addition and under multiplication by nonnegative
scalars. Let C be the cone of nonnegative elements. Then a "trace" is
a function

tr: C -> [0, +infinity]

which is linear in the obvious sense and satisfies

tr(xy) = tr(yx)

whenever both xy and yx are nonnegative.

Note: we allow the trace to be infinite, since the interesting von
Neumann algebras are infinite-dimensional. This is why we define
the trace only on nonnegative elements; otherwise we get "infinity minus
infinity" problems. The same thing shows up in measure theory,
where we start by integrating nonnegative functions, possibly getting
the answer +infinity, and worry later about other functions.

Indeed, a trace is very much like an integral, so we're really studying a
noncommutative version of the theory of integration. On the other hand,
in the matrix case, the trace of a projection operator is just the
dimension of the space it's the projection onto. We can define a
"projection" in any von Neumann algebra to be an operator with p* = p
and p^2 = p. If we study the trace of such a thing, we're studying a
GENERALIZATION OF THE CONCEPT OF DIMENSION. It turns out this can be
infinite, or even nonintegral!

We say a factor is "type I" if it admits a nonzero trace for which the
trace of a projection lies in the set {0,1,2,...,+infinity}. We say it's
"type I_n" if we can normalize the trace so we get the values {0,1,...,n}.
Otherwise, we say it's "type I_infinity", and we can normalize the trace
to get all the values {0,1,2,...,+infinity}.

It turns out that every type I_n factor is isomorphic to the algebra of
n x n matrices. Also, every type I_infinity factor is isomorphic to the
algebra of all bounded operators on a Hilbert space of countably infinite
dimension.

Type I factors are the algebras of observables that we learn to love in
quantum mechanics. So, the real achievement of von Neumann was to begin
exploring the other factors, which turned out to be important in quantum
field theory.

We say a factor is "type II_1" if it admits a trace whose values on
projections are all the numbers in the unit interval [0,1]. We say it
is "type II_infinity" if it admits a trace whose value on projections
is everything in [0,infinity].

Playing with type II factors amounts to letting dimension be a
continuous rather than discrete parameter!

Weird as this seems, it's easy to construct a type II_1 factor. Start
with the algebra of 1 x 1 matrices, and stuff it into the algebra of
2 x 2 matrices as follows:

      ( x 0 )
x |-> ( 0 x )

This doubles the trace, so define a new trace on the algebra of 2 x 2
matrices which is half the usual one. Now keep doing this, doubling the
dimension each time, using the above formula to define a map from the
2^n x 2^n matrices into the 2^{n+1} x 2^{n+1} matrices, and normalizing
the trace on each of these matrix algebras so that all the maps are
trace-preserving. Then take the UNION of all these algebras... and
finally, with a little work, complete this and get a von Neumann algebra!
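The finite stages of this construction are easy to play with numerically. Here is a sketch in Python with numpy (the function names are mine) of the doubling embedding and the renormalized trace; the embedding preserves the normalized trace, and a trace of 1/4 already shows up on a projection at the 4 x 4 stage:

```python
import numpy as np

def embed(x):
    """The doubling map x |-> diag(x, x), stuffing n x n matrices
    block-diagonally into 2n x 2n matrices."""
    n = x.shape[0]
    out = np.zeros((2 * n, 2 * n), dtype=complex)
    out[:n, :n] = x
    out[n:, n:] = x
    return out

def normalized_trace(x):
    """The ordinary trace divided by the matrix size, so the identity
    always has trace 1; with this normalization the embeddings above
    are trace-preserving."""
    return np.trace(x) / x.shape[0]

x = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)
print(normalized_trace(x))          # (2.5+0j)
print(normalized_trace(embed(x)))   # (2.5+0j): unchanged by embedding

# A projection with normalized trace 1/4 at the 4 x 4 stage:
p = np.diag([1.0, 0.0, 0.0, 0.0]).astype(complex)
print(normalized_trace(p))          # (0.25+0j)
```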

One can show this von Neumann algebra is a factor. It's pretty
obvious that the trace of a projection can be any fraction in the
interval [0,1] whose denominator is a power of two. But actually,
*any* number from 0 to 1 is the trace of some projection in this
algebra - so we've got our paws on a type II_1 factor.

This isn't the only II_1 factor, but it's the only one that contains a
sequence of finite-dimensional von Neumann algebras whose union is dense
in the weak topology. A von Neumann algebra like that is called
"hyperfinite", so this guy is called "the hyperfinite II_1 factor".

It may sound like something out of bad science fiction, but the
hyperfinite II_1 factor shows up all over the place in physics!

First of all, the algebra of 2^n x 2^n matrices is a Clifford algebra,
so the hyperfinite II_1 factor is a kind of infinite-dimensional
Clifford algebra. But the Clifford algebra of 2^n x 2^n matrices is
secretly just another name for the algebra generated by creation and
annihilation operators on the fermionic Fock space over C^{2n}.
Pondering this a bit, you can show that the hyperfinite II_1 factor is
the smallest von Neumann algebra containing the creation and
annihilation operators on a fermionic Fock space of countably infinite
dimension.

In less technical lingo - I'm afraid I'm starting to assume you know
quantum field theory! - the hyperfinite II_1 factor is the right algebra
of observables for a free quantum field theory with only fermions.
For bosons, you want the type I_infinity factor.

There is more than one type II_infinity factor, but again there is
only one that is hyperfinite. You can get this by tensoring the type
I_infinity factor and the hyperfinite II_1 factor. Physically, this
means that the hyperfinite II_infinity factor is the right algebra of
observables for a free quantum field theory with both bosons and fermions.

The most mysterious factors are those of type III. These can be simply
defined as "none of the above"! Equivalently, they are factors for
which any nonzero trace takes values in {0,infinity}. In a type III
factor, all projections other than 0 have infinite trace. In other
words, the trace is a useless concept for these guys.

As far as I'm concerned, the easiest way to construct a type III factor
uses physics. Now, I said that free quantum field theories had
different kinds of type I or type II factors as their algebras of
observables. This is true if you consider the algebra of *all*
observables. However, if you consider a free quantum field theory on
(say) Minkowski spacetime, and look only at the observables that you can
cook from the field operators on some bounded open set, you get a
subalgebra of observables which turns out to be a type III factor!

In fact, this isn't just true for free field theories. According to a
theorem of axiomatic quantum field theory, pretty much all the usual
field theories on Minkowski spacetime have type III factors as their
algebras of "local observables" - observables that can be measured in
a bounded open set.

Okay, so much for the crash course on von Neumann algebras! Next time
I'll hook this up to Mueger's work on 2-categories.

In the meantime, here are some references on von Neumann algebras in
case you want to dig deeper. For the math, try these:

5) Masamichi Takesaki, Theory of Operator Algebras I, Springer,
Berlin, 1979.

6) Richard V. Kadison and John Ringrose, Fundamentals of the
Theory of Operator Algebras, 4 volumes, Academic Press, New York,
1983-1992.

7) Shoichiro Sakai, C*-algebras and W*-algebras, Springer, Berlin,
1971.

A W*-algebra is basically just a von Neumann algebra, but defined
"intrinsically", in a way that doesn't refer to a particular
representation as operators on a Hilbert space.

For applications to physics, try these:

8) Gerard G. Emch, Algebraic Methods in Statistical Mechanics and Quantum
Field Theory, Wiley-Interscience, New York, 1972.

9) Rudolf Haag, Local Quantum Physics: Fields, Particles, Algebras,
Springer, Berlin, 1992.

10) Ola Bratteli and Derek W. Robinson, Operator Algebras and Quantum
Statistical Mechanics, 2 volumes, Springer, Berlin, 1987-1997.

-----------------------------------------------------------------------
Previous issues of "This Week's Finds" and other expository articles on
mathematics and physics, as well as some of my research papers, can be
obtained at

http://math.ucr.edu/home/baez/

For a table of contents of all the issues of This Week's Finds, try

http://math.ucr.edu/home/baez/twf.html

A simple jumping-off point to the old issues is available at

http://math.ucr.edu/home/baez/twfshort.html

If you just want the latest issue, go to

http://math.ucr.edu/home/baez/this.week.html


J. J. Lodder

Dec 30, 2001, 7:08:47 PM
John Baez <ba...@math.ucr.edu> wrote:

snip


> Four of the last five sentences are true. In particular, you really
> *can* see a green laser beam shining due north from the Royal
> Observatory, across the Thames, past the Citigroup Building and out
> into the night. And speaking of longitude, the day before Christmas
> I visited this observatory and had a wonderful time learning how John
> Harrison solved the longitude problem.

Sure, but it can't be the zero meridian, as anyone who has been there
carrying a GPS receiver can tell you. The brass line in Greenwich
Observatory is about 100 m off from 0^o 0' 00'' nowadays.
How this can be is a FAQ in sci.geo.satellite-nav:
In order to define a grid on the earth a 'datum' is needed,
actually a parameterization of the 'best fitting' ellipsoid.
(Best fitting to the geoid, that is.)

As it happened, the man who did the work originally,
Airy (indeed -the-), used an ellipsoid that fitted well in England.
Unfortunately this ellipsoid is off-center with respect to the current
internationally accepted datum (WGS84). To make both systems fit the
common reference point was chosen to be the intersection of Airy's zero
meridian with the equator, with the consequence that it is off in
Greenwich.
(It is in the Atlantic Ocean - no complaints from the residents.)
The mark on the ground at Greenwich is of historical interest only.
Some of the 'equator marks' in Africa are off for the same reason.

> The longitude problem? Ah, how soon we forget! It's pretty easy to
> tell your latitude by looking at the sun or the stars. However, it's
> pretty hard to tell your longitude, unless you have a clock that keeps
> good time. After all, if you know what time it is in a fixed place,
> like Greenwich, you can figure out how far east or west you've gone by
> comparing the time you see the sun rise to the time it would rise there.

Actually seeing the sun rise won't do, for various reasons.
You should time the highest point at midday instead.
That's what these guys with sextants in the old sea paintings are doing.

> Unfortunately, until the late 1700's, pendulum clocks didn't work well
> at sea, due to the rocking waves. This was a real problem! Ships would
> lose track of their longitude, go astray, and sometimes even run aground,
> killing hundreds of sailors.

Pendulum clocks still won't do their thing at sea :-)
However, they were quite useful to keep time at fixed stations.
Using them in observatories made it possible to determine the longitude
of at least a couple of fixed points very accurately.

Getting the timing correct was done by watching the motions
(eclipses) of the Jovian moons, which could be accurately predicted.
(After Romer had sorted out the finite speed of light mess).

Best,

Jan

Richard Bullock

Dec 30, 2001, 7:11:52 PM

"John Baez" <ba...@math.ucr.edu> wrote in message
news:a0lsfp$i38$1...@glue.ucr.edu...

>
> Also available at http://math.ucr.edu/home/baez/week175.html
>
> December 29, 2001
> This Week's Finds in Mathematical Physics (Week 175)
> John Baez
>
> I spent this Christmas in Greenwich, England. Over repeated visits
> to England I have discovered many fascinating things of which many
> Americans are unaware. For example: while in traffic one must drive on
> the left side of the road, on escalators one must stand on the right.

This is a London thing. In the rest of GB, you stand wherever you want.


> You flip switches down to turn on lights. Camels and zebras have
> escaped from the Royal Zoo and mated, and their hybrids roam the English
> countryside.

hmm

> On the roadside you will occasionally see signs for
> "humped zebra crossings".

This is true. There is one about 4 miles from my house.


> Also, the Royal Observatory in Greenwich
> fires a powerful green laser each night to mark the Prime Meridian -
> zero degrees longitude.
>

<SNIPPED story about longitude and parts of others.>


> This is only a tiny fraction of the fascinating lore in Richards' book.
> Ever wonder why there are 7 days in a week? That's pretty easy: they're
> named after the 7 planets - in the old sense of "planets", meaning
> heavenly bodies visible by eye that don't move with the stars. But
> here's a harder puzzle! Why are the 7 planets listed in this order?
>
> Sun (Sunday - Dies Solis)
> Moon (Monday - Dies Lunae)
> Mars (Tuesday - Dies Martis)
> Mercury (Wednesday - Dies Mercurii)
> Jupiter (Thursday - Dies Iovis)
> Venus (Friday - Dies Veneris)
> Saturn (Saturday - Dies Saturni)
>

The current names of the days are mostly taken from Norse gods (the
Vikings invaded us about 1200 years ago). I think the Roman
counterparts are the same as above, though.

Sunday - Sun's day
Monday - Moon's day
Tuesday - Tiw's day (god of war)
Wednesday - Woden's day (god of wisdom)
Thursday - Thor's day (god of thunder)
Friday - Freya's day (goddess of love and beauty)
Saturday - Saturn's day


> There's actually a nice explanation. However, I won't give it away here.
> Can you guess it?
>

I have seen something like this: order the objects according to how
slowly they appear to move - supposedly slower objects were seen as
more powerful:
Saturn, Jupiter, Mars, Sun, Venus, Mercury, Moon

Then give the hours in the day these names in order.
1. Saturn, 2. Jupiter, 3. Mars, 4. Sun ..........23. Jupiter, 24. Mars
The following day would then be
1. Sun, 2. Venus, 3. Mercury, ...................23. Venus, 24. Mercury
Next day
1. Moon, 2. Saturn.........23. Saturn, 24. Jupiter
Next day
1. Mars, 2. Sun............23. Sun, 24. Venus
Next day
1. Mercury, 2. Moon........23. Moon, 24, Saturn
Next day
1. Jupiter, 2. Mars........23. Mars, 24. Sun
Next day
1. Venus, 2. Mercury.......23. Mercury, 24. Moon

We are now back to the beginning. The cycle will repeat again with Saturn

The days are then named for the planet ruling the 1st hour of each day.

Day 1 - Saturn - Saturday
Day 2 - Sun - Sunday
Day 3 - Moon - Monday
Day 4 - Mars - Tuesday
Day 5 - Mercury - Wednesday
Day 6 - Jupiter - Thursday
Day 7 - Venus - Friday

It seems a complicated system, but it does give the desired result.
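The scheme boils down to the fact that 24 mod 7 = 3: stepping 24 hours ahead moves you 3 places along the slowness ordering. A quick Python version of the count (my sketch):

```python
# Planets from slowest to fastest apparent motion across the sky.
planets = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# Name each day after the planet ruling its first hour.  Since
# 24 mod 7 = 3, each day's ruler is 3 steps past the previous one's.
day_rulers = [planets[(24 * day) % 7] for day in range(7)]
print(day_rulers)
# ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
```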
<SNIPPED rest>

Ric

Tony Smith

Dec 30, 2001, 1:28:28 AM
In week 175, John Baez writes:

"... every type I_n factor is isomorphic to the algebra of n x n
matrices. Also, every type I_infinity factor is isomorphic to the
algebra of all bounded operators on a Hilbert space of countably
infinite dimension. ...

... For bosons, you want the type I_infinity factor. ...

... the algebra of 2^n x 2^n matrices is a Clifford algebra, so the
hyperfinite II_1 factor is ... the [completion of the] UNION of all
these algebras ... a kind of infinite-dimensional Clifford algebra.
But the Clifford algebra of 2^n x 2^n matrices is secretly just
another name for the algebra generated by creation and annihilation
operators on the fermionic Fock space over C^{2n}. ...

... There is more than one type II_infinity factor, but again there
is only one that is hyperfinite. You can get this by tensoring the
type I_infinity factor and the hyperfinite II_1 factor. Physically,
this means that the hyperfinite II_infinity factor is the right
algebra of observables for a free quantum field theory with both
bosons and fermions. ...

... if you consider a free quantum field theory on (say) Minkowski
spacetime, and look only at the observables that you can cook from
the field operators on some bounded open set, you get a subalgebra of
observables which turns out to be a type III factor! In fact, this
isn't just true for free field theories. According to a theorem of
axiomatic quantum field theory, pretty much all the usual field
theories on Minkowski spacetime have type III factors as their
algebras of "local observables" - observables that can be measured
in a bounded open set. ...".

---------------------------------------------------------

Here are some questions I have:

1 - Can any realistic interacting quantum field theory in a local
neighborhood (small enough to look Minkowski-like)
be described by the tensor product
hyperfinite II_1 x I_infinity ?

2 - Didn't Stueckelberg (in Quantum Theory in Real Hilbert Space,
Helvetica Physica Acta 33 (1960) 727-752) show that you can
formulate quantum theory just as well in Real Hilbert Space,
so long as there exists an operator J that acts such that
the Real Hilbert Space is effectively isomorphic to a complex space,
and
if so,
is it necessary (or just conventional) to assume that
the Clifford algebra of 2^n x 2^n matrices is represented
by Cl(C^(2n)) of the complex vector space C^(2n),
or could it also be represented by Cl(R^(2n)) ?
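As a minimal numerical illustration of the complex-structure operator J in question 2 (my own sketch, not Stueckelberg's formulation): an orthogonal operator J on R^(2n) with J^2 = -1 lets complex scalars act on the real space.

```python
import numpy as np

n = 2  # model R^(2n) with (re, im) pairs interleaved
j2 = np.array([[0.0, -1.0],
               [1.0,  0.0]])       # 90-degree rotation on each pair
J = np.kron(np.eye(n), j2)         # block-diagonal complex structure

assert np.allclose(J @ J, -np.eye(2 * n))  # J^2 = -1

def scalar_mult(a, b, x):
    """Act by the complex scalar a + bi on the real vector x."""
    return a * x + b * (J @ x)
```

With such a J, the rule (a + bi)x = ax + bJx makes R^(2n) into a complex vector space isomorphic to C^n, which is the sense in which the real Hilbert space is "effectively isomorphic to a complex space".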

3 - If (see 2 above) the hyperfinite II_1 factor can be written as
the completion of the union of all Cl(R^(2n)) Clifford algebras,
then,
in light of the periodicity factorization
Cl(N) = Cl(8) x ...(tensor N/8 times)... x Cl(8)
can you regard the "fermionic" hyperfinite II_1 part as
being made up of a bunch of copies of Cl(8) Clifford algebras ?
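For reference, the periodicity factorization in question 3 is real Bott periodicity for Clifford algebras. In standard notation (these are textbook identities, not from the original posts; R(m) denotes the algebra of m x m real matrices):

```latex
\mathrm{Cl}(8) \cong \mathbb{R}(16), \qquad
\mathrm{Cl}(n+8) \cong \mathrm{Cl}(n) \otimes \mathrm{Cl}(8)
```

Iterating gives Cl(8N) = Cl(8) x ... x Cl(8) (N tensor factors) = R(16^N), which is the factorization invoked above with N/8 factors for Cl(N).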

4 - Since Cl(8) has an 8-dim vector part, a 28-dim bivector part,
and two 8-dim half-spinors,
and since those 8+28+8+8 = 52 dimensions can be put together
naturally to make the 52-dim group F4,
and since F4 is the automorphism group of the exceptional
Jordan algebra J3(O) of 3x3 Hermitian Octonion matrices,
then
can you say that the "fermionic" hyperfinite II_1 part of
a realistic quantum field theory in a Minkowski neighborhood
has an algebraic structure related to J3(O) ?

5 - Can you combine the "bosonic" I_infinity part of
a realistic quantum field theory in a consistent way with
the construction of 4 above to get nice J3(O) algebraic
structure of the whole realistic quantum field theory ?


Tony 30 Dec 2001


Jerry Freedman

unread,
Dec 30, 2001, 11:16:13 PM12/30/01
to
ba...@math.ucr.edu (John Baez) wrote in message news:<a0lsfp$i38$1...@glue.ucr.edu>...
[unnecessarily quoted text deleted by moderator]
>
> Here's a book that tells [Harrison's] story in more detail:

>
> 1) Dava Sobel, Longitude, Fourth Estate Ltd., London, 1996.
>
> I found it in the gift shop of the Observatory. It's a fun read, but
> for the technical reader it's frustratingly vague on the technical
> details of how Harrison's clocks actually work.
>

Honestly Dr. Baez. You should get out more. The book was the subject
of a multipart dramatic TV series on PBS I think but I could be wrong.
If it's been on TV once, it will be again.

Back to lurking - wish I had Oz' chutzpah to actually ask questions
here.

J. Freedman

mountain man

unread,
Dec 30, 2001, 11:17:59 PM12/30/01
to
....[snip]...

> Mathematical physics has deep roots in astronomy, which may have been
> the first exact science.

....[trim]...


There are literally hundreds, if not thousands, of megalithic
structures scattered across the British Isles and western Europe.

They have been described as "horizon declinometers" and were
constructed one thousand years before the great pyramids, i.e.
between 4000 BC and 3000 BC ...... pre-history.


> Since ancient science was closely tied to numerology, I can't resist
> mentioning some fun facts relating the calendar and the deck of cards.

....[snip]...


What evidence are you claiming to support "close ties
between ancient science and numerology"?

Since you mentioned it, do you think there exist any remnant
ties between modern science and numerology (e.g. Bode's Law)?

Best wishes for 2002 to one and all.

Pete Brown
Falls Creek, NSW, Oz

John Baez

unread,
Jan 1, 2002, 9:49:38 PM1/1/02
to
In article <4QPX7.46661$HW3....@newsfeeds.bigpond.com>,
mountain man <prfb...@magna.com.au> wrote:

>What evidence are you claiming to support "close ties
>between ancient science and numerology"?

The Pythagorean tradition leaps to mind. Our knowledge
of the Pythagoreans is indirect and comes from sources
written long after the 6th century BC when Pythagoras
is believed to have lived, but the evidence suggests that
the Pythagoreans were not only responsible for brilliant
discoveries like the relation between harmony and rational
ratios of frequencies, but also that these discoveries
arose from a sort of number-mysticism. Aristotle writes
in his Metaphysics:

"... the so-called Pythagoreans, who were the first to
engage in mathematics, advanced this study, and being trained
in it they thought that its principles were the principles of
all things.... since all other things seemed in their whole nature
to be modelled on numbers, and numbers seemed to be the first
things in the whole of nature, they supposed the elements of
numbers to be the elements of all things, and the whole heaven
to be a musical scale and a number."

In his book _Early Greek Science: Thales to Aristotle_, Geoffrey
Lloyd writes:

"Whether or not they were the first to recognise the numerical
ratios of musical harmonies, this certainly provided one of their
chief examples to illustrate the role of number. The intervals
of an octave, fifth and fourth could all be expressed in terms
of simple numerical ratios, 1:2, 2:3 and 3:4. Here was a
startling instance of a phenomenon that had no obvious connection
with numbers exhibiting a structure that could be expressed
mathematically, and it seemed to the Pythagoreans that if this
applied to musical intervals, it might well be true of other things
too, if only their mathematical relations could be discovered."

I believe this impulse underlies all of mathematical physics,
from the most ancient to the most modern. The particular example
of "simple numerical ratios of frequencies" reappears over
and over again, in diverse contexts ranging from early attempts
to reconcile the solar and lunar calendars to recent discoveries
concerning the dynamics of the moons of Jupiter, the rings of
Saturn, the Balmer formula for the energy levels of the hydrogen
atom, "period doubling to chaos", and (most fundamentally) the
formula for the energy levels of the harmonic oscillator.

Lloyd continues:

"But to put their achievement into perspective we must add
two things. The first is that the Pythagoreans held not merely
that the formal structure of phenomena is expressible
in terms of numbers, but also that things consist of numbers:
many of them assumed that things are made of numbers, the
numbers themselves being conceived as concrete material objects.

Secondly, many of the resemblances that the Pythagoreans
claimed to find between things and numbers were quite fantastic
and arbitrary. Thus we are told that they equated justice with
the number four (the first square number) and marriage with the
number five (this represents the union of male - identified with
the number three - and female - two). Opportunity, apparently,
was identified with the number seven.... Obviously while the
search for numerical ratios proved fruitful in such fields as
the analysis of musical harmonies, it also and more often
led to mumbo-jumbo and crude number-mysticism."

I think if I knew more about Babylonian and Egyptian astronomy
I could find other connections to numerology there; right now
all I know is that they were really in love with nicely divisible
numbers, which is why we have 360 degrees in a circle, 60 minutes
in an hour, and 24 hours in a day. Of course, nicely divisible
numbers have practical uses too! Long division is a pain in the
butt, especially when you're writing on clay tablets.


John Baez

unread,
Jan 1, 2002, 9:50:53 PM1/1/02
to
In article <6c840ad4.01123...@posting.google.com>,
Jerry Freedman <edi...@rcn.com> wrote:

>ba...@math.ucr.edu (John Baez) wrote in message
>news:<a0lsfp$i38$1...@glue.ucr.edu>...

>> Here's a book that tells [Harrison's] story in more detail:


>>
>> 1) Dava Sobel, Longitude, Fourth Estate Ltd., London, 1996.
>>
>> I found it in the gift shop of the Observatory. It's a fun read, but
>> for the technical reader it's frustratingly vague on the technical
>> details of how Harrison's clocks actually work.

>Honestly Dr. Baez. You should get out more. The book was the subject
>of a multipart dramatic TV series on PBS I think but I could be wrong.


>If it's been on TV once, it will be again.

Hmm... are you saying I should get out more, or that I should stop
travelling so much and watch more TV? :-)

By the way, Nigel Seeley told me that this book has more information
on the technical details of Harrison's clocks:

ANDREWES, William J. H. (ed.) The quest for longitude: The Proceedings
of the Longitude Symposium, Harvard University, Cambridge, Massachusetts,
November 4-6, 1993. Cambridge MA: Harvard University Collection of
Historical Scientific Instruments, 1996.


Derek Wise

unread,
Jan 1, 2002, 9:54:06 PM1/1/02
to physics-...@ncar.ucar.edu
JB wrote:
> ....[Augustus] stole the last day from February and stuck it on his

> own month to make it 31 days long, and did some extra reshuffling so the
> months next to his had only 30 - giving us our current messy setup.

In the modern calendar, July has 31 days and is adjacent to August.

DeReK

John Baez

unread,
Jan 2, 2002, 4:51:56 PM1/2/02
to
In article <a0tsoe$f3e$1...@clyde.its.caltech.edu>,
Derek Wise <derek...@itt.com> wrote:

>JB wrote:

>In the modern calendar, July has 31 days and is adjacent to August.

Yeah - I only remembered that a few days ago, after writing that
issue of This Week's Finds. As a kid I refused to remember how
many days were in each month, since it seemed hopelessly arbitrary
and ugly - an all-too-human invention, rather than something
intrinsic to the universe. Also, I was never fond of the mnemonic

Thirty days hath September
All the rest I don't remember ....

mainly because so many months end in "-ember" that this mnemonic
would need a mnemonic of its own for me to recall it. It was only
much later that I learned the "knuckles and spaces" method for
keeping track of this information. For some reason I tried this
a few days ago, and then I said "Hey! There's a month with 31 days
next to August! What gives?" I meant to look up the facts in
Richards' book _Mapping Time_, but I forgot. Thanks for reminding
me!

Anyway, here's the deal: the calendar reform of Julius Caesar
gave the months these numbers of days:

Januarius 31
Februarius 29/30
Martius 31
Aprilis 30
Maius 31
Iunius 30
Iulius 31
Sextilis 30
September 31
October 30
November 31
December 30

A nice systematic alternation, though you might wonder why *February* gets
picked on; this is because the earlier Roman calendar had a short
February, and a month called Mercedonius stuck in the middle of
February now and then.

Augustus screwed it up as follows:

Januarius 31
Februarius 29/30
Martius 31
Aprilis 30
Maius 31
Iunius 30
Iulius 31
Augustus 31
September 30
October 31
November 30
December 31

In short: he took the month of Sextilis, renamed it after himself,
gave it an extra day, and switched the alternating pattern of 30 and
31 after that month.

By the way, Richard Bullock gave the "right" answer to my puzzle
about why the 7 planets are listed in the order they are as names
of days of the week. By this I mean he gives the same answer that
Richards does in _Mapping Time_. Astrologers like to list the
planets in order of decreasing orbital period, counting the sun
as having period 365 days, and the moon as period 29 days:

Saturn
Jupiter
Mars
Sun
Venus
Mercury
Moon

For the purposes of astrology they wanted to assign a planet to
each hour of each day of the week. They did this in a reasonable
way: they assigned Saturn to the first hour of the first day,
Jupiter to the second hour of the first day, and so on, cycling
through the list of planets over and over, until each of the
7 x 24 = 168 hours was assigned a planet. Each day was then
named after the first hour in that day. Since 24 mod 7 equals
3, this amounts to taking the above list and reading every third
planet in it (mod 7):

Saturn (Saturday)
Sun (Sunday)
Moon (Monday)
Mars (Tuesday)
Mercury (Wednesday)
Jupiter (Thursday)
Venus (Friday)
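Since 24 = 3 (mod 7), stepping ahead one day is the same as stepping three places down the astrologers' list. A one-line check (mine, not from Richards' book):

```python
# Astrologers' order of decreasing orbital period.
planets = ["Saturn", "Jupiter", "Mars", "Sun", "Venus", "Mercury", "Moon"]

# Day d starts at hour 24*d, and 24*d = 3*d (mod 7).
week = [planets[(3 * d) % 7] for d in range(7)]
print(week)
# -> ['Saturn', 'Sun', 'Moon', 'Mars', 'Mercury', 'Jupiter', 'Venus']
```

Reading off "every third planet" recovers exactly the Saturday-through-Friday list above.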

I don't think anyone is *sure* that this is how the days got the
names they did; the earliest reference for this scheme is the
Roman historian Dion Cassius (AD 150-235), who came long after the
days were named. However, Dion says the scheme goes back to Egypt.
In the _Moralia_ of Plutarch (AD 46-120) there was an essay entitled
"Why are the days named after the planets reckoned in a different order
from the actual order?" Unfortunately this essay has been lost and
only the title is known.

To bring the subject back to physics: we should see all these attempts
to bring order to time as part of a gradual process of developing
ever more precise and logical coordinate systems for the spacetime
manifold we call our universe. We may laugh at how the Roman pontifices
took bribes to start the year a day early; our descendants may laugh
at how we add or subtract leap seconds from Coordinated Universal
Time (UTC) to keep it in step with the irregular rotation of that lumpy
ball of rock we call Earth (or more precisely, the time system called
UT2, based on the Earth's rotation). How precise will we get? Will
we someday be worrying about leap attoseconds? Leap Planck times?

By the way: can anyone explain the difference between Universal
Time (UT) and the corrected versions UT1 and UT2?


Lewis Mammel

unread,
Jan 2, 2002, 5:19:17 PM1/2/02
to

John Baez wrote:


>
> Cute, eh? I'm not sure how much of this stuff is coincidence and how
> much was planned out by the mysterious mystics who invented playing
> cards. Of course we can't take these whole numbers too seriously -
> for example, the anomalistic month is actually 27.55455 days long, not
> 28. However, a 364-day year *is* mentioned in the Book of Enoch, a
> pseudepigraphical Hebrew text which was found, among other places, in the
> Dead Sea Scrolls. In fact, a year of this length was used in Iceland as
> late as 1940. The idea of having one epagomenal day and dividing each
> season into months with 30, 30 and 31 days has also been favored by many
> advocates of calendar reform.

Except the readily observable "lunation" is the synodic month of
29.53059 days. The Anomalistic month is closer to the Sidereal and
Tropical months.

Enoch says "In certain fixed months, the moon completes its cycle
every 29 days, in certain others, every 28". ( 1 Enoch 78 )
Hmmm, should be 30 and 29, right? He probably counted days between
the days it's full ... must have.

Lew Mammel, Jr.

Chris Hillman

unread,
Jan 2, 2002, 5:17:51 PM1/2/02
to

On Sun, 30 Dec 2001, John Baez wrote:

> Mathematical physics has deep roots in astronomy, which may have been
> the first exact science. Thanks to astrology, the ancient theocratic
> states put a lot of resources into precisely tracking and predicting the
> motion of the sun, moon and planets. For example, by 700 BC the
> Babylonians had measured the length of the year to be 365.24579 days,
> with an error of only .00344 days. Two hundred years later, they had
> measured the length of the month to be 29.53014 days - an error of only
> 2.6 seconds.
>
> If there were 360 days in a year, 30 days in a month, and 12 months in a
> year, the ancients would have been happy, since they loved numbers with
> lots of divisors. But alas, there aren't! These whole numbers come
> tantalizingly close, but not close enough, so the need for accurate
> calendars, balanced by the desire for simplicity, kept pushing the
> development of mathematics and astronomy forward.

John didn't mention the ancient -Greek- calendar, but interestingly
enough, the well-known "simple continued fraction algorithm" was according
to tradition developed by uhm... Theaetetus, in about... uhm... either about
250 B.C. or about 450 B.C. Whatever--- the point is that the algorithm
was originally developed in the context of finding "efficient" rational
approximations to a given real number for purposes of constructing
practical and accurate calendars! In fact, IIRC, Greek tradition ascribed
the first standardized Greek calendar to Theaetetus.

(The elementary theory of "finite" continued fractions was apparently
later worked out in detail by Eudoxus in his theory of ratios, and the
beginning of the modern theory is of course due to Lagrange and the
17-year-old Galois.)

Unfortunately, I forget the details (and last time I checked, the Greek
calendar wasn't covered in the calendar FAQ over on sci.astro), but IIRC
the ancient Greek calendar was a lunar calendar, so the basic idea is
presumably to create an easily memorized cycle of long (30 day) and short
(29 day) months which doesn't get too far out of synch with the seasons.
Let's say the Greeks knew that the number of days per month is M ~
29.53014. The cfa gives

M ~ 29 + 1/(1+1/(1+1/7))

= 443/15

This suggests taking a cycle in which 443 days are evenly distributed over
15 months. You can do this by taking a cycle of 7 alternating long (30
day) and short (29 day) months, followed by one extra long month-- or some
permutation. Symbolically:

(LS)^7 L = L^8 S^7

Numerically:

8*30+7*29 = 443

A better approximation is

M ~ 29 + 1/(1+1/(1+1/(7+1/(1+1/3))))

= 1949/66

So at the expense of slightly greater complexity, one can take a cycle in
which 1949 days are evenly distributed over 66 months. To wit: 35 long
months and 31 short months. Symbolically:

[(LS)^7 L LS]^3 [(LS)^7 L] = L^35 S^31

Numerically:

35*30 + 31*29 = 1949
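Chris's convergents can be recomputed directly. A quick sketch (my code, using Python's fractions module) that expands M = 29.53014 into its partial quotients and folds them back into exact convergents:

```python
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of x, as exact Fractions."""
    quotients, t = [], x
    for _ in range(n):
        k = int(t)
        quotients.append(k)
        if t == k:
            break
        t = 1 / (t - k)        # invert the fractional part
    result = []
    for i in range(1, len(quotients) + 1):
        f = Fraction(quotients[i - 1])
        for q in reversed(quotients[:i - 1]):
            f = q + 1 / f      # fold the partial quotients back up
        result.append(f)
    return result

for c in convergents(29.53014, 6):
    print(c, float(c))
```

Both 443/15 and 1949/66 appear in the output, just as in the hand computation above.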

Needless to say, the Greeks didn't use 15 or 66 named months-- presumably
they just added leap days on to the ends of the usual months in accordance
with a 15 month cycle of "leap months". If so, a pretty simple system,
really.

It is amusing to note that the algorithm for constructing the "evenly
distributed sequences" was rediscovered at least 2220 years later in the
context of drawing diagonal lines with a digital plotter! (The same
algorithm is still used to draw diagonal lines on a computer screen.)
AFAIK, the software engineering community is largely unaware of the fact
that this algorithm is in fact very ancient. (It is closely related to
the Euclidean algorithm, which is really essentially the same thing as the
basic continued fraction algorithm, so it's no coincidence that the
Euclidean algorithm was also found by the ancient Greeks [possibly by
someone other than Euclid], although historically it apparently appeared
-after- the discovery of the cfa and its companion "line-pixelating
algorithm".)
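The "evenly distributed sequence" itself is easy to generate with the digital-line rule (a sketch under my own conventions, not any historical code): month k is long exactly when the running multiple of long_months/total_months crosses an integer, which is the same bookkeeping a line-drawing routine does per pixel.

```python
def month_lengths(longs, total, long_len=30, short_len=29):
    """Distribute `longs` long months as evenly as possible over
    `total` months, Bresenham-style."""
    return [long_len
            if (k + 1) * longs // total > k * longs // total
            else short_len
            for k in range(total)]

cycle = month_lengths(35, 66)
print(sum(cycle), cycle.count(30))  # total days, number of long months
```

For the 66-month cycle this gives 35 long months and 1949 days, matching the convergent 1949/66; for the cruder cycle, month_lengths(8, 15) gives 443 days.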

I suppose I should also note that what one really wants for calendars is a
-two-dimensional cfa- (or higher dimensional cfa), since really one wants
an efficient rational approximation to the line (RP^2 point)

(365.24579, 29.53014, 1)

No doubt the Greeks discovered that such a thing is very hard to find.
There is a HUGE mathematical literature (IIRC, several thousand papers are
listed in a bibliography published decades ago, and the literature has at
least doubled since then). I'll just say that good old one-dimensional
continued fractions have a fascinating ergodic theory, and that my idea
for "up-down continued fraction algorithms" in higher dimensions (as well
as more sophisticated proposals) explicitly involves using SL(2,C)-valued
cocycles; this is the theory of -nonabelian- 1-cocycles which has been
mentioned in this group from time to time.

References:

author = {Lagarias, J. C.},
title = {Number Theory and Dynamical Systems},
booktitle = {The Unreasonable Effectiveness of Number Theory},
editor = {Stefan A. Burr},
series = {Proceedings in Applied Mathematics},
volume = 46,
publisher = {American Mathematical Society},
year = 1992}

author = {McIlroy, M. Douglas},
title = {Number Theory in Computer Graphics},
booktitle = {The Unreasonable Effectiveness of Number Theory},
editor = {Stefan A. Burr},
series = {Proceedings in Applied Mathematics},
volume = 46,
publisher = {American Mathematical Society},
year = 1992}

> There is only one mistake in this book that I would like to complain
> about. Following Roger Penrose, Connes takes quasicrystals as
> evidence for some mysterious uncomputability in the laws of nature.

In his book Noncommutative Geometry, Connes takes the space of Penrose
tilings as the sine qua non of a noncommutative geometry. I might just
mention here that in my dark mutterings in a recent post about
reinterpreting symbolic dynamics in terms of non-Hausdorff sheaves, one of
the goals is to unify/systematize two types of invariants, K-theoretic
invariants arising from various C*-algebras defined by a given system,
and "projective homotopy groups", which were originally defined by Geller
and Propp for two and higher dimensional systems, using branched manifolds
rather than non-Hausdorff sheaves. (A completely different attempt to
unify these concepts, so far -much- more fully worked out, has been
presented by Klaus Schmidt. However, AFAIK, this has not yet resulted in
making projective homotopy into a readily computed invariant.)

> The idea is that since there's no algorithm for deciding when a patch
> of Penrose tiles can be extended to a tiling of the whole plane,

Hmm... I think two ideas might have been confused here, although both
ideas share in common the concept of "obstructions" to extending a local
section to a global section. Here, the space of tilings, S, is a
(non-Hausdorff) sheaf over R^d (say). Local sections correspond to
"patches" of tiles, and global sections, if they exist, correspond to
complete tilings on R^d. Then, the two different ideas are as follows:

First, Penrose showed that if you try to construct a Penrose tiling using
his matching rules, you soon wind up with a patch P which satisfies the
matching rules everywhere but which cannot be extended. So you need to
remove some tiles and try again. That's non-algorithmic and computers
hate it :-/ Here the point is that even when a sheaf has uncountably many
global sections (which is the case for the sheaf of Penrose tilings), it
might nonetheless be the case that -most- local sections cannot be
extended to global sections!

(A slight subtlety here: it is simplest to first consider the sheaf of all
tilings whose vertices lie in a particular dense countable set.
Specifically, take the standard five-cycle on R^5; it has two invariant
planes and one invariant line; project Z^5 onto one of the two invariant
planes--- that's the set in which the vertices must lie! This is the
smallest "sheaf of Penrose tilings" which makes sense. The cardinality of
the set of global sections is, at least in standard mathematics, the
cardinality of the continuum. If you choose some disk D, however the
number of local sections over D will be countable. If you now allow
arbitrary translations, your sheaf gets a lot bigger; exponentiate the two
cardinal numbers just mentioned!)
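The dense vertex set just described is easy to compute numerically. A sketch (my code; the plane is the invariant plane of the cyclic shift on R^5, as in de Bruijn's construction):

```python
import math
from itertools import product

# The cyclic shift on R^5 leaves invariant the plane spanned by
# u = (cos(2*pi*k/5))_k and v = (sin(2*pi*k/5))_k, along with a
# second such plane and the line through (1,1,1,1,1).
u = [math.cos(2 * math.pi * k / 5) for k in range(5)]
v = [math.sin(2 * math.pi * k / 5) for k in range(5)]

def project(p):
    """Project an integer 5-vector onto the invariant plane
    (up to an overall scale factor)."""
    return (sum(a * b for a, b in zip(p, u)),
            sum(a * b for a, b in zip(p, v)))

# Projecting all of Z^5 yields the dense countable planar set in
# which the Penrose vertices must lie; here, a small sample:
sample = [project(p) for p in product(range(-1, 2), repeat=5)]
```

Note that (1,1,1,1,1) projects to the origin, since the fifth roots of unity sum to zero, and that cyclically shifting a lattice vector rotates its projection by 2*pi/5 - which is where the fivefold symmetry comes from.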

Second, Wang showed that if you consider tilings by squares which are
defined by matching rules, there can be no algorithm for deciding whether
any complete tilings exist for a given set of matching rules. Here the
point is that it could well happen that NO local sections extend to global
sections. The reason is easy to understand: tilings by squares can be
interpreted as the history of running a Turing machine (program) on some
data (initial row of tiles). Of course, the Halting Theorem says that
there is no -general- algorithm for deciding whether a given Turing
Machine will halt if started with a given tape state. This implies Wang's
result, which is analogous to well-known results like the fact that there
is no -general- algorithm which, given a presentation of a group by
generators and relations, can decide whether or not the group is the
trivial group. Wang's work is also related to work by Raphael Robinson.

Let me also mention a third phenomenon in Penrose tilings, which was
discovered by J. H. Conway, and also has the flavor of obstructions to
extensions. Namely, Conway found that it is possible to have a complete
tiling of R^2\disk by Penrose rhombs, everywhere obeying the matching
rules, which cannot be completed to a tiling of R^2! Again, you can think
of this as a local section which covers R^2\disk, which cannot be extended
to a global section. The simplest example consists of a Penrose tiling
with one "worm", an infinite line through the tiling which intersects a
quasiperiodic array of hexagonal patches of rhombs, which can all be
-simultaneously- "flipped" to give another valid tiling; the picture is
something like this:

-VV--V-VV--V-

flips to

-AA--A-AA--A-

Now remove a disk and flip just one half:

***
-AA-*****--V-
***

Now you can see why the hole can't be filled in! Interestingly, as Conway
pointed out, it can however be -moved- along the worm! Clearly, this has
the flavor of a "cocycle read around the boundary". Conway allegedly
classified the types of holes which can occur, up to an appropriate
equivalence relation, but IIRC what JHC told me, he never wrote down his
discoveries in -any- form, and the only published account, in the book
Tilings and Patterns, which was written by G. C. Shephard and based upon a
days-long conversation with Conway, appears to be flawed and indeed to
contradict a fundamental theorem of de Bruijn. (See the sources cited
below if you're morbidly curious.)

Later Conway and Lagarias worked out a theory of "tiling groups" inspired
by this kind of cohomological idea. Thurston wrote an expository article
on this beautiful theory which is unfortunately hard to follow. (It was
published in the American Mathematical Monthly around 1995.) I never could
understand whether this really does include Conway's earlier work on
Penrose tilings as a special case, but certainly if there is any justice,
it does.

I should also point out there is a very simple way to understand Penrose
tilings as two-dimensional symbolic arrays which was found by Robbie
Robinson. This does -not- contradict the "local D_5 symmetry" apparent in
any picture of a Penrose tiling! (The same idea works for generalized
Penrose tilings which are asymmetric.)

Also, Klaus Schmidt has greatly developed the idea of "fundamental
cocycles" in symbolic dynamics. He has used this notion to unify tiling
groups with the projective homotopy groups defined by Geller and Propp
(another very interesting way of trying to study the notion of "holed
tilings" which cannot be completed due to a topological obstruction). If
anyone is interested in following up on the question of what tiling groups
say about Penrose tilings, the papers by Robinson and by Schmidt are the
place to start (after learning the multigrid and oblique tiling
definitions). AFAIK, the problem of computing the projective homotopy
groups of Penrose tilings is still open, but there are other cohomology
groups which have been defined and computed by Anderson and Putnam
(Anderson was still an undergraduate when he coauthored the paper in
question!).

Maybe I should add a caveat: in one-dimensional sheaves arising from
symbolic dynamical systems, the obstructions to completing local sections
are certainly -not- cohomological or even homotopical: the problem is that
some sections have endpoints, but these can be contracted and thus are not
even seen by homotopy, and thus not seen by cohomology. I guess the point
is that in addition to these "obvious" obstructions, in two and higher
dimensions you can have the more subtle kind which are at least partially
captured by tiling groups and projective homotopy groups.

Incidentally, there is a close connection between the cfa and "ribbons"
(for which see Grunbaum and Shephard) in generalized Penrose tilings!
See "Sturmian Dynamical Systems" on my page

http://www.math.washington.edu/~hillman/talks.html

And, there is a nice connection between knot theory, orbits in smooth
dynamical systems, and symbolic dynamics, which is being developed by
Susan Williams and Dan Silver and coauthors:

http://www.mathstat.usouthal.edu/~williams/research.html

Note that the "oblique tiling" definition of Penrose tilings used there
brings out rather clearly the close relationship between Penrose tilings
and phenomena in Diophantine approximation--- this was already understood
in the early seventies by de Bruijn and Conway, but oblique tilings make
this much clearer.

References:

The mathematical theory of Penrose tilings really begins with de Bruijn's
classic paper, which includes a very nice way to understand "worms", and
which defines the tilings in terms of "multigrids", a notion inspired by
de Bruijn's work on counting "configurations" of lines, and which has a
hidden topological flavor:

author = {N. G. de Bruijn},
title = {Algebraic Theory of {P}enrose Tilings of the Plane},
journal = {Indagationes Mathematicae},
volume = 43,
year = 1981,
pages = {38--66}}

For oblique tilings:

author = {Christophe Oguey and Michel Duneau and Andre Katz},
title = {A Geometrical Approach of Quasiperiodic Tilings},
journal = {Communications in Mathematical Physics},
volume = 118,
year = 1988,
pages = {99-118}}

For Wang tilings and Raphael Robinson's work:

author = {Branko Grunbaum and G. C. Shephard},
title = {Tilings and Patterns},
publisher = {W. H. Freeman},
year = 1987}

For local sections in the sheaf of Penrose tilings (but using different
terminology!):

author = {Roger Penrose},
title = {Tilings and Quasicrystals: a Non-Local Growth Problem},
booktitle = {Introduction to the Mathematics of Quasicrystals},
editor = {Marko Jaric},
publisher = {Academic Press},
address = {San Diego},
year = 1991,
pages = {53--80}}

For tiling groups:

author = {J. H. Conway and J. C. Lagarias},
title = {Tiling with Polyominoes and Combinatorial Group Theory},
journal = {Journal of Combinatorial Theory},
series = {A},
volume = 53,
year = 1990,
pages = {183--208}}

author = {William P. Thurston},
title = {{C}onway's Tiling Groups},
journal = {American Mathematical Monthly},
volume = 97,
year = 1990,
pages = {757--773}}

For projective homotopy:

author = {William Geller and Jim Propp},
title = {The Projective Fundamental Group of a $\Z^2$-Shift},
journal = {Ergodic Theory and Dynamical Systems},
year = 1995,
pages = {1091--1118}}

For Robinson's Z^2-shift interpretation of Penrose tilings:

author = {E. A. Robinson},
title = {Dynamical Theory of {P}enrose Tilings},
journal = {Ergodic Theory and Dynamical Systems},

(I've lost the date, but it appeared in about 1994-7.)

For Schmidt's work on fundamental cocycles, which combines tiling groups
and projective homotopy groups:

author = {Klaus Schmidt},
title = {Tilings, Fundamental Cocycles and Fundamental Groups of Symbolic
$\Z^d$ Actions},

(This was published in Ergodic Theory and Dynamical Systems, but I've lost
the precise reference.)

The paper by Anderson and Putnam is

author = {Jared E. Anderson and Ian F. Putnam},
title = {Topological Invariants for Substitution Tilings
and their Associated $C^\ast$-Algebras},

(It was published in Ergodic Theory and Dynamical Systems, but again I've
lost the precise reference.)

> nature must do something uncomputable to produce quasicrystals of this
> symmetry.

I don't know anyone who knows about Penrose tilings, other than RP, who
understands why he believes this. Certainly I don't.

One relevant point is that there are several -equivalent- definitions of
Penrose tilings besides the original one in terms of matching rules.

In particular, I don't consider the definition of generalized Penrose
tilings by either "multigrids" or "oblique tilings" to be non-algorithmic,
since you can compute -exactly- finite patches as large as you desire
simply by using enough floating point precision. By exactly, I mean that
the combinatorial structure is perfectly correct. (It is true that every
patch, no matter how large, which occurs in any Penrose tiling, must occur
in any other; this is the motivation for considering the space of Penrose
tilings as a noncommutative geometry.) But physical quasicrystals are
obviously always finite in volume! And in the theory of algorithms one is
usually interested in algorithms which perform in finite time tasks which
are defined by a finite amount of data. The oblique tiling definition
gives an algorithm which will draw an arbitrarily chosen patch inside a
disk of given radius in finite time.
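To make the "algorithmic" point concrete, here is an editor's sketch (not the poster's code) of de Bruijn's multigrid construction in Python: each intersection of two of the five families of parallel grid lines is dualized to exactly one rhombus. The offsets and patch size below are arbitrary illustrative choices; for a "true" Penrose tiling the offsets should sum to an integer.

```python
import math
from itertools import product

def penrose_patch(gamma, kmin=-2, kmax=2):
    """De Bruijn's multigrid ("pentagrid") construction: every intersection
    of two of the five families of parallel grid lines dualizes to exactly
    one rhombus of a Penrose-like tiling."""
    zeta = [(math.cos(2 * math.pi * j / 5), math.sin(2 * math.pi * j / 5))
            for j in range(5)]
    faces = []
    for j in range(5):
        for k in range(j + 1, 5):
            for kj, kk in product(range(kmin, kmax + 1), repeat=2):
                # Intersection of the lines x.zeta_j + gamma_j = kj
                # and x.zeta_k + gamma_k = kk (2x2 linear solve).
                (a, b), (c, d) = zeta[j], zeta[k]
                det = a * d - b * c
                rj, rk = kj - gamma[j], kk - gamma[k]
                p = ((rj * d - rk * b) / det, (a * rk - c * rj) / det)
                # Integer "grid coordinates" of the cells meeting at p.
                K = [math.ceil(p[0] * z[0] + p[1] * z[1] + gamma[m])
                     for m, z in enumerate(zeta)]
                verts = []
                for da, db in [(0, 0), (1, 0), (1, 1), (0, 1)]:
                    N = list(K)
                    N[j], N[k] = kj + da, kk + db
                    verts.append((sum(n * z[0] for n, z in zip(N, zeta)),
                                  sum(n * z[1] for n, z in zip(N, zeta))))
                faces.append(verts)
    return faces

# Offsets sum to zero, de Bruijn's condition for a "true" Penrose tiling.
faces = penrose_patch([0.2, 0.1, -0.3, 0.15, -0.15])
```

Every face comes out as a rhombus with unit sides along two of the five directions, which is exactly the "compute a finite patch exactly" point made above.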

(Hmm... OK, it -is- true that an unlucky "arbitrary choice" might well
result in some "bubbles" in the picture where two small alternate patches
overlap. But in that case, by simply increasing your precision you will
-always- eventually be able to eliminate this. So while the algorithm will
surely halt, you can't be quite sure this will happen before our Sun goes
nova.)

Also, even simple changes in the nature of the definition can have a very
big effect: for example, Petra Gummelt showed that if you consider
coverings with small overlaps allowed, then "Penrose shinglings" can be
defined using a single marked decagon. But AFAIK, the famous "Einstein
problem" is still open: can an aperiodic plane tiling be defined using a
single marked tile? (Einstein = "one-stone", get it? I don't know who
named the problem after Einstein, but it wasn't me.) Despite this
suggestion that aperiodic shinglings (which are at least as natural
as tilings as mathematical models of physical quasicrystals!) might be
significantly more tractable than aperiodic tilings, AFAIK so far very
little work has been done on them.

Reference (for shinglings):

author = {Petra Gummelt},
title = {Penrose tilings as coverings of congruent decagons},

(Oh darn, this preprint was dated about 1995 and was available on the web
but now seems to be lost. Eat your heart out :-/ Some other folk
published a rigorous proof of her claim, but I've lost that reference.)

> The flaw in this reasoning seems obvious: when nature gets stuck, it
> feels free to insert a *defect* in the quasicrystal.

George Onoda came up with a nice idea which is at least obliquely relevant
to this discussion.

It turns out that given a patch P (a "legal" patch in the sense that it
satisfies the matching rules for Penrose rhomb tilings), the matching
rules almost always force a larger patch K(P). The only exception is if P
= K(P). These special patches, called "kingdoms", tend to look like
rhombuses. If you are familiar with closure operators in lattice theory,
K(.) is one.
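For readers who haven't met them: a closure operator K is extensive (P is contained in K(P)), monotone, and idempotent (K(K(P)) = K(P)), so the kingdoms are exactly the fixed points P = K(P). A toy Python sketch by the editor, with made-up forcing rules standing in for the (much subtler) Penrose matching rules:

```python
def close(P, rules):
    """Smallest superset of P closed under the forcing `rules`;
    this is a closure operator: extensive, monotone, idempotent."""
    K = set(P)
    grew = True
    while grew:
        grew = False
        for (a, b), forced in rules.items():
            if a in K and b in K and not forced <= K:
                K |= forced
                grew = True
    return frozenset(K)

# Toy rules: holding both 1 and 2 forces 3; holding 3 and 4 forces 5.
rules = {(1, 2): {3}, (3, 4): {5}}
K = close({1, 2, 4}, rules)
assert K == frozenset({1, 2, 3, 4, 5})
assert close(K, rules) == K   # a fixed point: its own "kingdom"
```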

Onoda in effect noticed that these "kingdoms" are the building blocks of
the "empires" of Conway (hence the name). The empire E(P) of a patch P is
essentially the collection of patches which are "forced" by P; this
concept makes sense for any symbolic dynamical system but for Penrose
tilings one can say much more: E(P) consists of the kingdom K(P) flanked
by a quasiperiodic array of strictly smaller kingdoms K(Q_j). Up to
translation, there are of course only a finite number of such strictly
smaller kingdoms. See "Empires in Sturmian Systems" on my page

http://www.math.washington.edu/~hillman/talks.html

Back to Onoda's idea: he noticed further that you can grow a Penrose
tiling rather efficiently using a "partially non-deterministic" algorithm.
Namely, given a legal seed patch P, form K(P). Now, Onoda observed in his
numerical experiments, there are special locations on the "border" of K(P)
where there are precisely -two- choices for placing another tile. Make one
of these choices to form the patch P'. Now form K(P'), and so forth.
Onoda's Bell Labs colleagues, Socolar and DiVincenzo, then proved that
this construction really does work!

Mathematical physicists like Michael Baake study theoretical models of
-physical- quasicrystals. Some of these turn out (via positron-emission
microscopy which can image individual atoms) to really be almost perfect
three-dimensional analogues of Penrose tilings, or stacked layers of
two-dimensional Penrose tilings. But others turn out to mimic the
diffraction spectrum of such a quasiperiodic array of vertices (atoms),
but are not really almost-periodic in the geometric sense. Anyway, my
understanding is that no-one thinks that the OSD algorithm is a realistic
model for how truly almost-periodic quasicrystals form. And since the
algorithm is not fully deterministic, it does not disprove Penrose's
claim.

BTW, Steinhardt and Jeong proposed a physical mechanism which was widely
publicized by, among others, myself. The idea is to reformulate the
definition of Penrose tilings in terms of a kind of free energy.
Unfortunately, a few years ago a tiling expert patiently explained to me
why their paper, while mathematically correct, cannot give a satisfactory
-physical- explanation of the formation of physical quasicrystals. Even
more unfortunately, by now I have forgotten the details both of
Steinhardt's proposal and of the counterargument...

References:

For the OSD algorithm:

author = {J. E. S. Socolar},
title = {Growth Rules for Quasicrystals},
booktitle = {Quasicrystals: the State of the Art},
editor = {D. P. DiVincenzo and P. J. Steinhardt},
series = {Directions in Condensed Matter Physics},
volume = 11,
publisher = {World Scientific},
address = {Singapore},
year = 1991}

For the Steinhardt-Jeong definition of Penrose tilings:

author = {P. J. Steinhardt and H.-C. Jeong},
title = {New Rules for Constructing {P}enrose Tilings May Shed Light on
How Quasicrystals Form},
journal = {Nature},
volume = 382,
year = 1996,
pages = {433--435}}

> Quasicrystals do not need to be perfect to produce the characteristic
> diffraction patterns by which we recognize them.

Indeed, many physical quasicrystals exhibit a strict -statistical- "long
range order" but on the scale of atoms they are not almost-periodic.
"Physical quasicrystals", I should say, are usually loosely defined by
saying that they are substances whose diffraction spectrum looks discrete,
but does not correspond to the spectrum of any crystal. Quite a few of
them are known by now.

> In the algebraic approach, we define the "commutant" of a bunch of
> operators to be the set of operators that commute with all of them.
> We then say a von Neumann algebra is a *-algebra of operators that's
> the commutant of its commutant.

Can't resist pointing out that this is the same construction used in the
empire-cylinder duality. See "What is a Concept?" on my page

http://www.math.washington.edu/~hillman/personal.html

> with the algebra of 1 x 1 matrices, and stuff it into the algebra of
> 2 x 2 matrices as follows:
>
>          ( x 0 )
>  x |->   (     )
>          ( 0 x )
>
> This doubles the trace, so define a new trace on the algebra of 2 x 2
> matrices which is half the usual one.

There's a very nice relationship between approximately finite (AF)
C*-algebras constructed in this way and dimension groups, a dynamical
invariant of one-dimensional symbolic dynamical systems. See "Bratteli
diagrams" in

author = {Edward G. Effros},
title = {Dimensions and {C}*-algebras},
publisher = {American Mathematical Society},
address = {Providence, RI},
series = {Regional conference series in mathematics},
volume = 46,
year = 1981}

It's been quite a while since I've thought about this stuff, so I hope I
haven't muddled anything.

Chris Hillman

Home page: http://www.math.washington.edu/~hillman/personal.html

Lewis Mammel

unread,
Jan 3, 2002, 12:17:57 AM1/3/02
to

John Baez wrote:
>
> I believe this impulse underlies all of mathematical physics,
> from the most ancient to the most modern. The particular example
> of "simple numerical ratios of frequencies" reappears over
> and over again, in diverse contexts ranging from early attempts
> to reconcile the solar and lunar calendars to recent discoveries
> concerning the dynamics of the moons of Jupiter, the rings of
> Saturn, the Balmer formula for the energy levels of the hydrogen
> atom, "period doubling to chaos", and (most fundamentally) the
> formula for the energy levels of the harmonic oscillator.

I would say the Balmer Series is most fundamental, as it represents
the numerical foundation of the material world as we know it.
Cf. Plato's Timaeus, where he postulates the geometric shapes
which form the elements. The resemblance to atomic orbitals
is positively uncanny.

> I think if I knew more about Babylonian and Egyptian astronomy
> I could find other connections to numerology there; right now
> all I know is that they were really in love with nicely divisible
> numbers, which is why we have 360 degrees in a circle, 60 minutes
> in an hour, and 24 hours in a day. Of course, nicely divisible
> numbers have practical uses too! Long division is a pain in the
> butt, especially when you're writing on clay tablets.

Plato cites 5040 ( equal to 7! ) as the population of his
ideal state in LAWS-V, as it can be divided "... into fifty-nine
quotients and no more, ten of them, from unity onward, being successive."

Let's see: 2^4 * 3^2 * 5 * 7 gives 5*3*2*2 = 60 divisors, so he wasn't
counting 5040 as a divisor of itself.

The condition of divisibility by 1 through 10 is met, of course, by
7!/2 = 2520. I've always wondered if Plato realized this. Well, plenty
of his friends did, presumably, and I suppose he did too, but why 7!
when he doesn't mention this product as the defining feature?
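The divisor counting above is easy to check by brute force (a quick editor's sketch):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

d5040 = divisors(5040)
# 5040 = 2^4 * 3^2 * 5 * 7, so it has 5*3*2*2 = 60 divisors in all,
# i.e. 59 "quotients" if 5040 itself is not counted.
assert len(d5040) == 60
# "ten of them, from unity onward, being successive": 1..10 all divide 5040.
assert all(5040 % k == 0 for k in range(1, 11))
# And half of it, 2520 = 7!/2, already meets the 1..10 condition:
assert all(2520 % k == 0 for k in range(1, 11))
```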

This is supposed to be a practical consideration with the view
of dividing the population into equal groups for different purposes,
but it certainly strikes one as a bizarre criterion. You get the
feeling that there was a metaphysical issue of perfect balance
at stake. In a way, it's kind of an "out", maybe. He knew perfectly
well that this ideal was impractical, and hence various troubles
and disharmonies could be attributed to a failure to adhere to it.

Lew Mammel, Jr.

mountain man

unread,
Jan 3, 2002, 12:19:12 AM1/3/02
to
Thanks John!!

Interesting research on the Pythagoreans.
You picked a winning example there, and I'll
not quibble with the findings.

I'd like to take the opportunity of wishing you
the best in 2002 and well beyond, especially in
your Mathematical Physics series, which is forming
up into a tremendous resource for many.


Pete Brown
Falls Creek, NSW, Oz.


"John Baez" <ba...@galaxy.ucr.edu> wrote in message
news:a0p0f5$683$1...@glue.ucr.edu...

[Moderator's note: full text of quoted article deleted by moderator -- KS]

Robert C. Helling

unread,
Jan 3, 2002, 8:51:36 AM1/3/02
to
On Wed, 2 Jan 2002 21:51:56 +0000 (UTC), John Baez <ba...@galaxy.ucr.edu> wrote:

>By the way: can anyone explain the difference between Universal
>Time (UT) and the corrected versions UT1 and UT2?

I asked google for UTC UT1 UT2 and found
http://www.apparent-wind.com/gmt-explained.html
that contains

"There are actually a couple of variants of UT. UT as determined by actual
astronomical observations at a particular observatory is known as
UT0 ("UT-zero"). It is affected by the motion of the earth's
rotation pole with respect to the crust of the earth. If UT0 is
corrected for this effect, we get UT1 which is a measure of the
true angular orientation of the earth in space. However, because
the earth does not spin at exactly a constant rate, UT1 is not a
uniform time scale. The variation in UT1 is dominated by seasonal
oscillations due primarily to the exchange of angular momentum
between the atmosphere and the solid earth and seasonal tides. In
an effort to derive a more uniform time scale, scientists
established UT2. UT2 is obtained from UT1 by applying an adopted
formula that approximates the seasonal oscillations in the earth's
rotation. However, due to other variations including those
associated with the secular effects of tidal friction (the earth's
spin is continually but gradually slowing down), high frequency
tides and winds, and the exchange of angular momentum between the
earth's core and its shell, UT2 is also not a uniform time scale."

I also found http://www.wikipedia.com/wiki/Universal_time which is a bit
more technical.

Robert

--
.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oOo.oO
Robert C. Helling            Institut fuer Physik
                             Humboldt-Universitaet zu Berlin
print "Just another          Fon +49 30 2093 7964
stupid .sig\n";              http://www.aei-potsdam.mpg.de/~helling

Ralph E. Frost

unread,
Jan 3, 2002, 7:44:17 AM1/3/02
to

John Baez <ba...@galaxy.ucr.edu> wrote in message
news:a0u6d7$l65$1...@glue.ucr.edu...

> To bring the subject back to physics: we should see all these attempts
> to bring order to time as part of a gradual process of developing
> ever more precise and logical coordinate systems for the spacetime
> manifold we call our universe. We may laugh at how the Roman pontifices
> took bribes to start the year a day early; our descendants may laugh
> at how we add or subtract leap seconds from Coordinated Universal
> Time (UTC) to keep it in step with the irregular rotation of that lumpy
> ball of rock we call Earth (or more precisely, the time system called
> UT2, based on the Earth's rotation). How precise will we get? Will
> we someday be worrying about leap attoseconds? Leap Planck times?

Your mention of "bringing order to time" and continually readjusting our
time scales and chronometers to keep in step with the varied and irregular
rotations and spins of Earth and its neighbors and their similarly
gyrating orbitals oftentimes leave me with the impression that the focus
is put too much on the pristine regularity of our clocks and accounting
systems and not enough on the underlying, wildly irregular field-field
interactions and bobblings of the dancing many-bodies that we continually
strive to renormalize.

Without the precise accounting system the other would not come into focus.
True. But it seems to me what is being brought to order is the
interaccomodative underlying spins and oscillations and field-field
jostlings.

Is there a list somewhere of all the adjustments made to the calendar in
modern times? From that, can anyone say whether we are spiralling in,
spiralling out, just oscillating about merrily, or merely going through mild
variations in mass density?


Thomas Larsson

unread,
Jan 3, 2002, 5:34:57 PM1/3/02
to Sci. Physics. Research (E-mail)
John Baez <ba...@math.ucr.edu> wrote in message news:a0lsfp$i38$1...@glue.ucr.edu...

[ ... a long introduction to von Neumann algebras ]

Thank you for a very nice explanation of von Neumann algebras. This is one of
those things that I always wanted to learn, but I have never been motivated
enough to work through the technical literature.

Does anyone know why a simple von Neumann algebra is called a factor?

> The most mysterious factors are those of type III. These can be simply
> defined as "none of the above"! Equivalently, they are factors for
> which any nonzero trace takes values in {0,infinity}. In a type III
> factor, all projections other than 0 have infinite trace. In other
> words, the trace is a useless concept for these guys.
>

Even though I know little about von Neumann algebras, I picked up some
about type III factors during the late 80s. There is a rather intriguing web
of connections between type III factors and several parts of 2d mathematical
physics. Since I am rather rusty on this subject and quote from memory,
there might be some mistakes in the formulas below, but the big picture is
correct and known, albeit perhaps not so well known.

Type III factors (or maybe just hyperfinite ones) were classified by Vaughan
Jones in the mid 80s. This work gave him the Fields medal in 1990, together
with Witten among others. The projectors e_i satisfy an associative algebra,
which was known as the Temperley-Lieb algebra when I grew up, but which nowadays
has Jones' name stuck to it. I think it looks like this:

e_i e_{i+1} e_i = e_{i+1}.

From the e_i's you can construct operators R_i, satisfying

R_i R_{i+1} R_i = R_{i+1} R_i R_{i+1}.

We can visualize this equation as operations on an n-stranded braid, for i = 1, 2, ..., n-1.
Let R_i be the diagram where the i:th strand crosses over the i+1:st strand:

 \   /
  \ /
   \
  / \
 /   \
 i   i+1

Then the equation for the R_i looks like this:

 \ /  |         |  \ /
  \   |         |   \
 / \  |         |  / \
 |  \ /         \ /  |
 |   \     =     \   |
 |  / \         / \  |
 \ /  |         |  \ /
  \   |         |   \
 / \  |         |  / \

which is a basic equation in knot theory: the third Reidemeister move (equivalently, the braid relation).
The R_i's can also be related in a straightforward way (but I have forgotten how)
to the Yang-Baxter equation, which is important for exact solvability in statistical
physics:

R_12 R_13 R_23 = R_23 R_13 R_12.

Anyway, Jones proved that the dimensions of type III factors belong to a discrete
series:

q = 4 cos^2( pi/(m+1) ), m integer.

Here I think that q is the dimension, but maybe sqrt(q) is. Another place where the
same numbers arise is in the q-state Potts model, which is a statistical lattice
model with Hamiltonian

H = -J sum_<ij> delta(s_i, s_j).

On each node i of a lattice (2-dimensional, say) there is a spin s_i which can take
on q different values, and the sum runs over all nearest neighbors i and j. In
particular, the 2-state Potts model is nothing but the Ising model. As usual, we
define the partition function Z = Tr exp(-H) (Tr = sum over all configurations, and
I use units such that k_B T = 1). It turns out that Z can be written as a sum over
all graphs G that can be embedded in the lattice,

Z = sum_G u^L q^C,

where u = exp(J) - 1, L is the number of links in G and C is the number of clusters,
i.e. the number of connected components of the graph. With this definition, the
Potts model can be extended to arbitrary values of q, not just integers. For q<=4,
the 2D q-state Potts model undergoes a second-order phase transition for a critical
value of u. It is described by a conformal field theory (CFT) with central charge

c = 1 - 6/(m(m+1)).

For m integer the CFT is unitary, but not otherwise. The cool thing is that the m
in the q formula is the same m as in the c formula. E.g., m = 3 =>
q = 2 and c = 1/2, i.e. the Ising model. m = 5 => q = 3 and c = 8/10, the 3-state
Potts model. The limit q -> 1 (not q = 1, which is trivial) is related to the
percolation problem, and in particular the magnetic critical exponent is related
to the fractal dimension of the incipient infinite percolation cluster.
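The graph expansion quoted above is the Fortuin-Kasteleyn identity, and it can be checked by brute force on a small graph. An editor's sketch (with u = exp(J) - 1, which is the weight that goes with the sign convention H = -J sum delta and Z = Tr exp(-H)):

```python
import math
from itertools import product, combinations

def potts_Z_direct(edges, n, q, J):
    """Z = sum over all spin configurations of exp(J * #satisfied edges)."""
    return sum(math.exp(J * sum(s[i] == s[j] for i, j in edges))
               for s in product(range(q), repeat=n))

def components(edges, n):
    """Number of connected components (isolated vertices count too)."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i, j in edges:
        parent[find(i)] = find(j)
    return len({find(v) for v in range(n)})

def potts_Z_fk(edges, n, q, J):
    """Fortuin-Kasteleyn: Z = sum over edge subsets A of u^|A| q^c(A)."""
    u = math.exp(J) - 1
    return sum(u ** k * q ** components(A, n)
               for k in range(len(edges) + 1)
               for A in combinations(edges, k))

tri = [(0, 1), (1, 2), (0, 2)]
assert abs(potts_Z_direct(tri, 3, 3, 0.7) - potts_Z_fk(tri, 3, 3, 0.7)) < 1e-9
```

Note that the right-hand side makes sense for any real q, which is exactly how the q -> 1 percolation limit mentioned above is taken.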

That von Neumann algebras are related to the Potts model is not so surprising,
since Temperley and Lieb first found the TLJ algebra when working on it. However,
that there is a connection between type III dimensions and unitary CFTs is highly
nontrivial, although the reasons for this remain mysterious to me.

Note that the limit point when m -> infinity, i.e. q = 4 or c = 1, must be special.
In fact, as was first noticed by Rodney Baxter (as in Yang-Baxter), there is a
connection to the four-color problem in graph theory. In this field, the partition
function Z goes under the name dichromatic polynomial. If we specialize to the
anti-ferromagnetic Potts model at zero temperature, i.e. let J -> -infinity,
adjacent spins are forced to take on different values. In other words, we must color
the map so that neighboring countries have different colors. In this limit Z is
called the chromatic polynomial for q colors. It has been conjectured (maybe proved)
that for generic planar graphs, the only zeroes of the chromatic polynomial occur
at the so-called Beraha numbers, which are precisely those q's which are dimensions
of type III factors. In particular, q = 1, 2, 3 are included - you cannot color a
generic map with less than four colors. q = 4 is special: there are colorings, but
very few compared to q >= 5, and this is probably why it was so hard to prove the
four-color theorem.
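For the curious, the chromatic polynomial is easy to compute for small graphs by deletion-contraction. An editor's sketch, checking the integer Beraha values q = 1, 2, 3 against the planar graph K4:

```python
import math
from fractions import Fraction

def chromatic(edges, n, q):
    """Chromatic polynomial by deletion-contraction:
    P(G, q) = P(G - e, q) - P(G / e, q)."""
    if not edges:
        return Fraction(q) ** n
    (a, b), rest = edges[0], edges[1:]
    if a == b:                      # self-loop: no proper coloring at all
        return Fraction(0)
    deleted = chromatic(rest, n, q)
    # Contract b into a, relabelling vertices above b downward.
    def relabel(v):
        v = a if v == b else v
        return v - 1 if v > b else v
    contracted = chromatic([(relabel(i), relabel(j)) for i, j in rest],
                           n - 1, q)
    return deleted - contracted

K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# P(K4, q) = q(q-1)(q-2)(q-3): zeroes at the integer Beraha values 1, 2, 3.
assert all(chromatic(K4, 4, q) == 0 for q in (1, 2, 3))
assert chromatic(K4, 4, 4) == 4 * 3 * 2 * 1
# The Beraha numbers B_n = 4 cos^2(pi/n) run 1, 2, 2.618..., 3, ... -> 4.
beraha = [4 * math.cos(math.pi / m) ** 2 for m in range(3, 8)]
```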

So there are a lot of cool connections between type III factors and mathematical
physics in 2 dimensions. Nothing similar is known in 3 dimensions, except for
something very important: objects appearing in nature are generically 3-dimensional
(considering only static phenomena, so time is irrelevant).


Ralph E. Frost

unread,
Jan 3, 2002, 5:36:39 PM1/3/02
to

John Baez <ba...@galaxy.ucr.edu> wrote in message
news:a0u6d7$l65$1...@glue.ucr.edu...
> In article <a0tsoe$f3e$1...@clyde.its.caltech.edu>,
> Derek Wise <derek...@itt.com> wrote:
>
> >JB wrote:
>
> >> ....[Augustus] stole the last day from February and stuck it on his

> By the way: can anyone explain the difference between Universal


> Time (UT) and the corrected versions UT1 and UT2?

I haven't found UT2 but, compliments of
http://aa.usno.navy.mil/faq/docs/UT.html

"One can think of UT1 as being a time determined by the rotation of the
Earth, over which we have no control, whereas UTC is a human invention. It
is relatively easy to manufacture highly precise clocks that keep UTC, while
the only "clock" keeping UT1 precisely is the Earth itself. Nevertheless, it
is desirable that our civil time scale not be very different from the
Earth's time, so, by international agreement, UTC is not permitted to differ
from UT1 by more than 0.9 second. When it appears that the difference
between the two kinds of time may approach this limit, a one-second change
called a "leap second" is introduced into UTC. This occurs on average about
once every year to a year and a half. "

The differences between UTC and UT1 (cesium atom vibration counts and earth
spin/orbit consistency) arise mostly because of slight variations in the
"strengths" of magnetic and gravitational fields in the local region, also
influenced by solar wind and CME events, and, on the cesium side, due to
small variations in regional mass-density.


For some other facts about hydrogen maser clocks in use at the US Naval
Observatory: http://tycho.usno.navy.mil/mc2.html


Lars Henrik Mathiesen

unread,
Jan 3, 2002, 5:37:09 PM1/3/02
to
ba...@galaxy.ucr.edu (John Baez) writes:
>By the way: can anyone explain the difference between Universal
>Time (UT) and the corrected versions UT1 and UT2?

UT0 is based on observations (meridian passages or the modern
equivalent) at each individual observatory.

UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
at the poles, 14 month period), which affects the UT0 time difference
between two observatories.

UT2 is corrected for known seasonal variations in rotational rate
(which don't affect time differences).

UTC is based on international atomic time (TAI), in the sense that UTC
- TAI is always a whole number of (TAI) seconds, but it's kept within
.9 seconds of UT1 by inserting leap seconds.

Lars Mathiesen (U of Copenhagen CS Dep) <tho...@diku.dk> (Humour NOT marked)

Dean Calver

unread,
Jan 3, 2002, 10:19:35 PM1/3/02
to

"Chris Hillman" <hil...@math.washington.edu> wrote in message
news:Pine.OSF.4.33.011230...@goedel1.math.washington.edu...

> It is amusing to note that the algorithm for constructing the "evenly
> distributed sequences" was rediscovered at least 2220 years later in the
> context of drawing diagonal lines with a digital plotter! (The same
> algorithm is still used to draw diagonal lines on a computer screen.)
> AFAIK, the software engineering community is largely unaware of the fact
> that this algorithm is in fact very ancient. (It is closely related to
> the Euclidean algorithm, which is really essentially the same thing as the
> basic continued fraction algorithm, so it's no coincidence that the
> Euclidean algorithm was also found by the ancient Greeks [possibly by
> someone other than Euclid], although historically it apparently appeared
> -after- the discovery of the cfa and its companion "line-pixelating
> algorithm".)

I never knew that!

Bresenham's line algorithm is the second algorithm you learn in computer
graphics (after pixel plotting); these days it's implemented directly in
hardware. It generalises to higher dimensions (I've seen 6D versions used in
some rendering code) so it gets used all over the place, from medical imaging
(ray-marching CAT scans) to the movies (heightfield tracing), and just to
draw lines of course.

If you're reading this on a graphics terminal, it's almost certainly being used
to display the window.

Most computer graphics people are amazed that it is 30-40 years old (almost
nothing lasts that long ;-) so to find out it's over 2000 years old is utterly
amazing.
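For readers who haven't seen it, here is the textbook integer-only form for the first octant (an editor's sketch; production code handles all eight octants). The running error term is exactly the remainder bookkeeping of the Euclidean/continued-fraction algorithm mentioned upthread:

```python
def bresenham(x0, y0, x1, y1):
    """Integer-only line rasterization for slopes in [0, 1], x0 <= x1.
    The error accumulator decides when the minor axis must step."""
    dx, dy = x1 - x0, y1 - y0
    err, y, pts = 2 * dy - dx, y0, []
    for x in range(x0, x1 + 1):
        pts.append((x, y))
        if err > 0:
            y += 1
            err -= 2 * dx
        err += 2 * dy
    return pts

assert bresenham(0, 0, 5, 2) == [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

The y-coordinates 0, 0, 1, 1, 2, 2 form exactly the kind of "evenly distributed sequence" the ancient construction produces.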

Bye,
Deano

Dean Calver
Creature Labs


Ralph E. Frost

unread,
Jan 4, 2002, 3:43:48 AM1/4/02
to

Lars Henrik Mathiesen <tho...@diku.dk> wrote in message
news:a11pkd$nqv$1...@munin.diku.dk...

> ba...@galaxy.ucr.edu (John Baez) writes:
> >By the way: can anyone explain the difference between Universal
> >Time (UT) and the corrected versions UT1 and UT2?
>
[...]

> UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
> at the poles, 14 month period), which affects the UT0 time difference
> between two observatories.

I am having difficulty picturing this. Is it that the polar axis does a
little tilting back and forth, 7 meters toward the outside (midnight side)
and seven months later 7 meters inward (sun side), as if the ball
of the earth were slightly rocking (a partial spin at right angles to the
major axis of rotation)? So that, for instance, Greenwich (and everything
else along a line of latitude) rolls "down" a little, and then back "up",
as would the action of a person holding a steering wheel in both hands and
oscillating the wheel leftish, then rightish a little.

Is that the motion every seven months?


Thomas Larsson

unread,
Jan 4, 2002, 2:55:01 PM1/4/02
to Sci. Physics. Research (E-mail)
Thomas Larsson <Thomas....@hdd.se> wrote in message news:a12mah$fds$1...@inky.its.caltech.edu...

[... a number of things on von Neumann algebras, some of which are not true]

Last night I looked up operator algebras in Connes' big red book on
Noncommutative geometry. Here is what I should have written.

> Type III factors (or maybe just hyperfinite ones) were classified by Vaughan
> Jones in the mid 80s.

Jones classified type II_1 subfactors, i.e. how one type II_1 factor N
sits inside another one M. The number q is called the index
[M:N]. Roughly, the index measures how many times N occurs in M. Apart
from the discrete series q = 4 cos^2( pi/(m+1) ), m integer, the index
can also take on any value q > 4.

I should have realized that q could not be the dimension of a type III
factor, since John wrote that the trace does not exist for such
factors.

> which was known as the Temperley-Lieb algebra when I grew up, but
> which nowadays has Jones' name stuck to it. I think it looks like this:
>
> e_i e_{i+1} e_i = e_{i+1}.

The correct definition of the TLJ algebra reads

1. e_i e_j = e_j e_i, if |i-j| > 1

2. e_i e_{i+1} e_i = q^-1 e_i,
   e_{i+1} e_i e_{i+1} = q^-1 e_{i+1}

3. e_i = e_i^* = e_i^2

> From the e_i's you can construct operators R_i, satisfying
>
> R_i R_{i+1} R_i = R_{i+1} R_i R_{i+1}.
>

You construct them as linear combinations of 1 and e_i; up to normalization,
R_i = (q+1) e_i - 1. (A scalar multiple of a projection alone could not
satisfy the braid relation.)

The rest of my previous post is probably true, provided that you substitute
"type III dimension" with "type II_1 index" everywhere.


Lars Henrik Mathiesen

unread,
Jan 4, 2002, 11:22:09 AM1/4/02
to
"Ralph E. Frost" <ref...@dcwi.com> writes:

>Lars Henrik Mathiesen <tho...@diku.dk> wrote:

>> UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
>> at the poles, 14 month period), which affects the UT0 time difference
>> between two observatories.

>I am having difficulty picturing this. Is this like the polar axis does a
>little tilting back and forth, 7 meters toward the outside (midnight side)
>and seven months later, 7 meters to the inward ((sun side) , like the ball
>of the earth is slightly rocking (partial spin at right angle to the major
>axis of rotation) so that, for instance, Greenwich (and everthing else
>along a line of latitude) rolls "down" a little, and then back "up" as would
>be the action of a person holding a steering wheel in both hands and
>oscillating the wheel leftish, then rightish a little.

>Is that the motion every seven months?

It's the right idea, but my original statement can be expanded a bit.

The total angular momentum of the Earth isn't changing at this
timescale; but the Earth isn't a rigid body, and angular momentum can
be transferred between different parts. The wobble is a movement of
the rigid crust relative to the axis of the total angular momentum.

In other words: there is no single spot to put the flag for the south
pole --- the proper place moves around on the surface of the Earth.

However, the wobble isn't side-to-side like your steering wheel
analogy. The actual motion is somewhat irregular, but the best
approximation is a circle (radius 7m). This looks a lot like the
precession of a spinning top --- but it's not the same mechanism.

If you use a coordinate grid for the Earth's surface with 90S fixed in
the average location, then at any given time, each circle of latitude
will actually be 7 meters low at one point, relative to the true axis
of rotation, and 7 meters high at the opposite point --- and the low
point will move all the way around the Earth in 14 months.

But that's not why UT1 needs compensation --- it's because the lines
of longitude are skewed too. If the axis of rotation is seven meters
off towards Canada, meridian passages at Greenwich and in New Zealand
will be retarded by a few milliseconds (if my math is right), and
those in Japan and South Africa will be advanced about the same.

John Baez

unread,
Jan 4, 2002, 4:32:01 PM1/4/02
to
Chris Hillman <hil...@math.washington.edu> wrote:

>John didn't mention the ancient -Greek- calendar, but interestingly
>enough, the well-known "simple continued fraction algorithm" was according
>to tradition developed by uhm... Theaetus, in about... uhm... either about
>250 B.C. or about 450 B.C.

Since the flourishing of classical Greek culture occurred around
500-350 BC, and the Macedonians (namely Alexander the Great) took
over the city-states roughly around 350 BC, I'd have chosen 450 BC
if I were forced at gunpoint to choose between these two possibilities.

However, it looks like Theaetetus' dates are 417-369 BC:

http://www-groups.dcs.st-andrews.ac.uk/~history/Mathematicians/Theaetetus.html

which makes sense, since he shows up prominently in a dialog by
Plato about mathematics, and Plato's dates are similar.

I hope you forgive this nitpick, but my superpartner studies
Greek and Chinese classical philosophy and the history of
science, and when she mentions a year, it's usually BC unless
otherwise specified! Gradually this has rubbed off on me,
so the difference between 250 BC and 450 BC seems enormous to
me.

Basically, by 250 BC, the classical Greek city-states had given
way to Hellenism, and a lot of the action in science was happening
in north Africa, like Alexandria, instead of Greece. You've got
folks like Eratosthenes and Euclid and Archimedes (in Sicily)....

>Whatever--- the point is that the algorithm
>was originally developed in the context of finding "efficient" rational
>approximations to a given real number for purposes of constructing
>practical and accurate calendars!

Yeah. The Greeks even had a special name for this algorithm:
antihypohairesis! See this old article by me on s.p.r.... it
flatters you at the end, so read it even if it's long and boring:
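The calendar connection can be made concrete: running the continued fraction algorithm on the fractional part of the tropical year (~365.2422 days) produces the classic leap-year rules as convergents. An editor's sketch (the attribution of the 8/33 rule to the medieval Persian calendar is a commonly repeated claim):

```python
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of the rational x
    (anthyphairesis: take the integer part, invert the remainder, repeat)."""
    p0, q0, p1, q1 = 0, 1, 1, 0
    out = []
    for _ in range(n):
        a = x.numerator // x.denominator
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append((p1, q1))
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return out

# Fractional part of the tropical year, ~0.2422 days:
cs = convergents(Fraction(2422, 10000), 5)
# 1/4 is the Julian leap rule; 8/33 is the rule reportedly used in Persia.
assert (1, 4) in cs and (8, 33) in cs
```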

..................................................................

>In article <84jql6$8...@charity.ucr.edu>,
>Bill Jefferys <bi...@clyde.as.utexas.edu> wrote:

>John Baez wrote:

>>For example, remember how Keith Ramsay and Phillip Helbig pointed
>>out the existence of two kinds of day? There's the 24-hour
>>"solar day" and the 23-hour, 56-minute and 4-second "sidereal
>>day", depending on whether you watch the sun or the stars going
>>around.

>But note: The "sidereal" day is really reckoned with respect to the vernal
>equinox, which moves relative to the stars! It's dependent upon the
>precession rate.

Yuck. I didn't know that. Is there a name for the "true sidereal day",
where we keep track of how the earth rotates relative to the fixed stars,
rather than the vernal equinox?

>None of the other sidereal periods are so reckoned. (e.g.,
>the tropical year is reckoned with respect to the vernal equinox, not the
>sidereal year, which is really reckoned with respect to the stars, as best
>as we know how).

Yeah, that makes me not like this use of the term "sidereal day". But
then astronomy, being the oldest science, seems full of strange
terminology that goes way back and is too deeply rooted to change.

>>1) The SIDEREAL MONTH is the time it takes for the moon to orbit
>>360 degrees around the earth as viewed from the fixed stars. It's
>>27.321661 days long. (Don't ask me if these are solar or sidereal days -
>>I assume the former, but the thing I'm reading doesn't say.)

>Your assumption is correct. Days of 86,400 SI seconds are the standard unit
>for timekeeping in astronomy when "days" are mentioned without qualifier.

Whew, good - I was starting to get paranoid, with all these
different kinds of days, months and years floating around.

>>This means that eclipses occur in a pattern which depends
>>on how the synodic and draconic months drift in and out of
>>phase.
>>
>>And *this* means that to predict eclipses, you need to find
>>a number which gives you something close to an integer when
>>you divide it by 27.321661 and also when you divide it by
>>27.212220.

>Except that you've used sidereal and draconitic months in your calculation
>here.

Yikes! Thanks for catching that slip!

>Interestingly enough, the number 6585 + 1/3 is what you get for all
>three months (sidereal, synodic, draconitic), as well as for the
>anomalistic month!

Cool! That's why I didn't notice I'd slipped up - the calculation
seemed to work pretty well even though I accidentally used the
sidereal month. But redoing the calculation, it turns out to
work better with the synodic month than it did with the sidereal one.
I.e., 6585 + 1/3 days is closer to being an integral number of
synodic months than sidereal ones.

Let me see how well it works with all 4 kinds of month:

6585 + 1/3 days is (6585 + 1/3)/29.530589 = 223.000403 synodic months
and it's also (6585 + 1/3)/27.212220 = 241.999121 draconitic months
and it's also (6585 + 1/3)/27.321661 = 241.029758 sidereal months
and it's also (6585 + 1/3)/27.554549 = 238.992601 anomalistic months

Yeah, you're right, it even works pretty well with the anomalistic
month!
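These near-integer checks are easy to redo by machine. A quick sketch (month lengths in days exactly as quoted above; the dictionary layout is mine):

```python
# Recompute the near-integer checks: month lengths in mean solar
# days, as quoted above, against the Saros of 6585 + 1/3 days.
saros = 6585 + 1/3

months = {
    "synodic":     29.530589,
    "draconitic":  27.212220,
    "sidereal":    27.321661,
    "anomalistic": 27.554549,
}

for name, length in months.items():
    cycles = saros / length
    off = abs(cycles - round(cycles))
    print(f"{name:11s} {cycles:10.6f} months (off by {off:.6f})")
```

The synodic and draconitic months come out closest to whole numbers, which is what makes the Saros an eclipse cycle.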

>This means that the Saros duplicates not only the fact
>of an eclipse (Sun at node, Moon and Sun in conjunction) but also the
>_kind_ of eclipse (annular or total in the case of solar eclipses). The
>Moon will be nearly the same distance from the Earth after one Saros period.

Yes. It seems like a rather lucky coincidence.

>It's nice to point out that these near-integers are conveniently calculated
>using continued fraction expansions.

You know, I've been reading a book called "The Mathematics of Plato's
Academy: A New Reconstruction" by D. H. Fowler, and he makes the case
that lots of Greek mathematics was secretly (or not so secretly) about
continued fraction expansions. For example, the Euclidean algorithm
to determine the greatest common divisor is based on the same idea as
continued fraction expansions: apparently the Greeks called it
"anthyphairesis", meaning "reciprocal subtraction". There's also a
nice simple proof that the golden ratio is irrational, based on the
fact that its continued fraction expansion:

1 + 1/(1 + 1/(1 + 1/(1 + 1/(1 + 1/(1 + .................... )))))

never ends. I don't know if the Greeks knew this proof, but there's
a nice geometrical version of this proof based on chopping a golden
rectangle into a square and another golden rectangle, then chopping
*that* golden rectangle into a square and a golden rectangle, etc. -
and it's so simple that I think some people argue the Greeks *had*
to know it. Anyway, I'm really curious now about how much of this
continued fraction technology was known to the Babylonians, how
much of it came from work on astronomy, and so on.
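Reciprocal subtraction is easy to mechanize: strip off the integer part, flip what's left, repeat. A minimal sketch (function name is mine), which recovers the all-1's expansion of the golden ratio and good calendar fractions for the synodic month:

```python
from math import sqrt

def continued_fraction(x, terms=10):
    """Reciprocal subtraction: strip off the integer part, take the
    reciprocal of what's left, and repeat."""
    result = []
    for _ in range(terms):
        a = int(x)
        result.append(a)
        x -= a
        if x == 0:
            break
        x = 1 / x
    return result

# The golden ratio's expansion is all 1's and never ends - one way
# to see that the golden ratio is irrational.
golden = (1 + sqrt(5)) / 2
print(continued_fraction(golden, 8))    # [1, 1, 1, 1, 1, 1, 1, 1]

# Truncating the expansion of the synodic month (in days) gives the
# good rational approximations used in calendar-making.
print(continued_fraction(29.530589, 5))
```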

It's also amusing how important continued fraction expansions are
in the modern theory of dynamical systems - for essentially the
same reason, namely that they let us study how long we have to
wait for two different periodic motions to come within epsilon
of drifting back into phase. Chris Hillman has written cool things
about this stuff....

John Baez

Jan 5, 2002, 6:03:11 PM

In article <a0onge$mr4$1...@glue.ucr.edu>, Tony Smith <tsm...@innerx.net> wrote:

>Here are some questions I have:
>
>1 - Can any realistic interacting quantum field theory in a local
> neighborhood (small enough to look Minkowski-like)
> be described by the tensor product
> hyperfinite II1 x Iinfinity ?

I don't know of any. As I said, there are pretty general theorems
in axiomatic quantum field theory saying that the algebra of observables
on any bounded open set should be a type III factor. By the way, these
axioms apply to quantum field theories on Minkowski spacetime, so
"small enough to look Minkowski-like" is not really relevant.

>2 - Didn't Stueckelberg (in Quantum Theory in Real Hilbert Space,
> Helvetica Physica Acta 33 (1960) 727-752) show that you can
> formulate quantum theory just as well in Real Hilbert Space,
> so long as there exists an operator J that acts such that
> the Real Hilbert Space is effectively isomorphic to a complex space,

That's sort of obvious, at least when you put it this way!

It amounts to saying: you can do quantum theory with a real
Hilbert space as long as you equip it with enough extra structure
that it's actually a complex Hilbert space.

More interesting is that you can do quantum theory with nothing
but a real Hilbert space. Quantum theory works fairly well over
the reals, complex numbers and quaternions. However, only in the complex
case do we get certain familiar features such as the fact that
every one-parameter unitary group has a self-adjoint generator.
In the real and quaternionic cases, you only get a skew-adjoint
generator. This means that there is no natural way to think of
"energy", "momentum" and so on as real-valued quantities. This
is interesting: you need complex numbers in quantum theory to
make things like energy and momentum be real!

> and
> if so,
> is it necessary (or just conventional) to assume that
> the Clifford algebra of 2n x 2n matrices is represented
> by Cl(C^(2n)) of the complex vector space C^(2n),
> or could it also be represented by Cl(R^(2n)) ?

If you study the algebra generated by annihilation and creation
operators on a fermionic Fock space in real quantum mechanics, you
get a real Clifford algebra. In complex quantum mechanics, you
get a complex Clifford algebra. I haven't dared think about
quaternionic Clifford algebras yet - I should probably leave that
to Toby.

>3 - If (see 2 above) hyperfinite II1 can be written as the completion
> of the union of all Cl(R^2n) Clifford algebras,
> then,
> in light of the periodicity factorization
> Cl(N) = Cl(8) x ...(tensor N/8 times)... x Cl(8)
> can you regard the "fermionic" part hyperfinite II1 as
> being made up of a bunch of copies of Cl(8) Clifford algebras ?

Sure. There is a theory of real von Neumann algebras, and there
is a real version of the hyperfinite II_1 factor, and it is indeed
a kind of infinite tensor product of copies of Cl(8). I say "a kind
of", because one must be a little careful in defining infinite tensor
products of von Neumann algebras. Nonetheless, it can be done.

A similar story would work over the complex numbers, but we
could use Cl(C^2), since Bott periodicity has period 2 over the
complex numbers.

I should have known you'd like the hyperfinite II_1 factor.
It's probably fundamental to your theories, since it makes the
idea of "infinite-dimensional Clifford algebra" precise! Luckily,
a vast amount is known about it. Vaughan Jones basically won
the Fields medal for his work on ways of embedding the hyperfinite
II_1 factor inside itself. These turn out to be related to quantum
groups, knot invariants, topological quantum field theories, and
lots of other things.

>4 - Since Cl(8) has an 8-dim vector part, a 28-dim bivector part,
> and two 8-dim half-spinors,
> and since those 8+28+8+8 = 52 dimensions can be put together
> naturally to make the 52-dim group F4,
> and since F4 is the automorphism group of the exceptional
> Jordan algebra J3(O) of 3x3 Hermitian Octonion matrices,
> then
> can you say that the "fermionic" part hyperfinite II1 of
> a realistic quantum field theory in a Minkowski neighborhood
> has an algebraic structure related to J3(O) ?

Well, I've already shot down the "Minkowski neighborhood" business,
but that may not be so important.

There may indeed be some nice mathematical relationship between
the real hyperfinite type II_1 factor and the exceptional Jordan algebra
J_3(O), coming from what you are saying.
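For what it's worth, the dimension count behind question 4 checks out mechanically: the grade-1 and grade-2 pieces of Cl(8), plus the two half-spinor representations of Spin(8), total 8 + 28 + 8 + 8 = 52, the dimension of F4. Just bookkeeping, nothing deeper:

```python
from math import comb

dim_vector = comb(8, 1)              # grade-1 part of Cl(8): 8
dim_bivector = comb(8, 2)            # grade-2 part: 28 (= dim so(8))
dim_half_spinor = 2 ** (8 // 2 - 1)  # each half-spinor rep of Spin(8): 8

total = dim_vector + dim_bivector + 2 * dim_half_spinor
print(total)                         # 52, the dimension of F4

# And Cl(8) itself has dimension 2^8 = 256 = 16 x 16, consistent
# with Cl(8) being the algebra of 16 x 16 real matrices.
print(sum(comb(8, k) for k in range(9)))
```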

Unfortunately, I'm not sure how "naturally" those various
pieces of Cl(8) can be put together to form F4, so I don't really
know how nice this idea of yours is. When a mathematician says
some construction is "natural", they are really saying it's invariant
under the action of some group (or more generally category - but here
it's probably just a group). So, until we know which group, we don't
really know what they mean.

So, for me to know what you mean by "naturally", I'll need to know
the answer to this question:

What's the biggest group of automorphisms of Cl(8) which preserve
the 52-dimensional subspace you're identifying with F4, and which
act as automorphisms of that F4?

The bigger the group, the more natural your construction!

I assume the group at least contains Spin(8), which acts on Cl(8)
in an obvious way.

>5 - Can you combine the "bosonic" I_infinity part of
>    a realistic quantum field theory in a consistent way with
>    the construction of 4 above to get nice J3(O) algebraic
>    structure of the whole realistic quantum field theory ?

I leave this as an exercise for the student. :-)


J. J. Lodder

Jan 5, 2002, 6:03:56 PM

John Baez <ba...@galaxy.ucr.edu> wrote:

On the contrary, the current system of leap seconds is not sustainable.
As the earth slows down these will have to be added more often.
Something better will have to be thought of,
not in our lifetime, but on a historical timescale.
The error of a clock that slows down at a constant rate
will grow quadratically with time.
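(The quadratic growth is just the sum of a linearly growing daily offset. A toy illustration - the slowdown rate k is made up:)

```python
# If each day runs long by an amount growing linearly in time
# (k seconds per day, per day), the accumulated clock error after
# N days is k * N * (N + 1) / 2 - quadratic in N.
k = 2e-5   # made-up slowdown rate, seconds per day per day

for N in (365, 3650, 36500):
    accumulated = sum(k * d for d in range(1, N + 1))
    print(N, accumulated, k * N * (N + 1) / 2)
```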

> By the way: can anyone explain the difference between Universal
> Time (UT) and the corrected versions UT1 and UT2?

The International Earth Rotation Service site www.iers.org
contains far more on the subject than you will probably want to know.

It is incredible how much there is to an apparently simple subject
such as the rotation of the earth, at the presently available accuracy.

Jan




Jim Heckman

Jan 5, 2002, 6:05:27 PM


On 4-Jan-2002, tho...@diku.dk (Lars Henrik Mathiesen) wrote:

> >Lars Henrik Mathiesen <tho...@diku.dk> wrote:
> >
> >> UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
> >> at the poles, 14 month period), which affects the UT0 time difference
> >> between two observatories.

[...]

> The total angular momentum of the Earth isn't changing at this
> timescale; but the Earth isn't a rigid body, and angular momentum can
> be transferred between different parts. The wobble is a movement of
> the rigid crust relative to the axis of the total angular momentum.

Is this wobble really due to the Earth's non-rigidity? There was a thread
here in s.p.r almost exactly n years ago (i.e., I know it was around the
Holidays, but forget which ones) about a wobble that results from a rigid
ellipsoid of revolution having its angular momentum slightly off-axis from
its inertial poles. In this case the angular momentum, and more importantly
the angular velocity, precesses uniformly around the poles in the body
frame.

--

Jim Heckman

ba...@rosencrantz.stcloudstate.edu

Jan 5, 2002, 4:54:59 PM


On Fri, 4 Jan 2002, Dean Calver wrote:

> Bresenham's line algorithm is the second algorithm you learn in computer
> graphics (after pixel plotting), these days it's implemented directly in
> hardware.

Yeah, that's the one I mean. The rediscovery by Bresenham is cited in the
paper by McIlroy which I cited.

Re John's nitpick--- for the record, I was joking about 250 versus 450
being "negligible" (for ancient history, of course it is not, as he
explained), and in any case it appears I misremembered the name of
Theaetetus and also his dates.
the ancient Greek lunar calendar wrong too, but information about this
calendar seems to be hard to come by, since as I said, last time I
checked, it isn't covered in the Calendar FAQ.

John Baez

Jan 6, 2002, 6:30:15 PM

In article <a12mah$fds$1...@inky.its.caltech.edu>,
Thomas Larsson <Thomas....@hdd.se> wrote:

>Thank you for a very nice explanation of von Neumann algebras.

You're welcome!

>Does anyone know why a simple von Neumann algebra is called a factor?

Just guessing: perhaps because arbitrary von Neumann algebras
can be built up from these by direct sums, which used to be called
direct products. Well, actually we need, not just direct sums, but
the more general "direct integrals". Still, there's a sense in
which

prime numbers : natural numbers :: factors : von Neumann algebras.

>Type III factors (or maybe just hyperfinite ones) were classified by Vaughn
>Jones in the mid 80s.

Actually, I'm pretty sure you mean type II_1 subfactors - i.e., the
way the hyperfinite type II_1 factor can be embedded as a *-subalgebra
of itself. Given such an embedding A -> B we can always write

A = pBp

for some projection p, and this lets us define a relative dimension
or "index" [B;A] by

[B;A] = tr(p)

Vaughan Jones won the Fields medal for working out the allowed
values of this quantity (which are as you describe) and constructing
subfactors that take on these allowed values with the help of the
Temperley-Lieb algebra. The construction is not hard, and if I
live forever I'll certainly explain this sometime in This Week's
Finds.

Now, there is also an index for the so-called hyperfinite III_1
subfactors, which was shown by Popa to work very much like the II_1
case, but the papers by Jones that I've read seem to focus on the
II_1 case.

>In fact, as was first noticed by Rodney Baxter (as in Yang-Baxter), there is a
>connection to the four-color problem in graph theory. In this field, the
>partition function Z goes under the name dichromatic polynomial. If we
>specialize to the anti-ferromagnetic Potts model at zero temperature, i.e.
>let J -> -infinity, adjacent spins are forced to take on different values.
>In other words, we must color the map so that neighboring countries have
>different colors. In this limit Z is called the chromatic polynomial for
>q colors. It has been conjectured (maybe proved) that for generic planar
>graphs, the only zeroes of the chromatic polynomial occur at the so-called
>Beraha numbers, which are precisely those q's which are dimensions
>of type III [sub]factors. In particular, q = 1, 2, 3 are included - you
>cannot color a generic map with less than four colors. q = 4 is special:
>there are colorings, but very few compared to q >= 5, and this is probably
>why it was so hard to prove the four-color theorem.

Yes, this stuff is really cool. I wrote about the Beraha numbers
and 4-color theorem here:

........................................................................

Also available as http://math.ucr.edu/home/baez/week8.html

This Week's Finds in Mathematical Physics (Week 8)
John Baez

I was delighted to find that Louis Kauffman wants to speak at the
workshop at UCR on knots and quantum gravity; he'll be talking
on "Temperley Lieb recoupling theory and quantum invariants of links and
manifolds". His books

On knots, by Louis H. Kauffman, Princeton, N.J., Princeton University
Press, 1987 (Annals of Mathematics Studies, no. 115)

and more recently

Knots and physics, by Louis H. Kauffman, Teaneck, NJ, World Scientific
Press, 1991 (K & E Series on Knots and Everything, vol. 1)

are a lot of fun to read, and convinced me to turn my energies
towards the intersection of knot theory and mathematical physics. As
you can see by the title of the series he's editing, he is a true
believer in the deep significance of knot theory. This was true even
before the Jones polynomial hit the mathematical physics scene, so he
was well-placed to discover the relationship between the Jones
polynomial (and other new knot invariants) and statistical mechanics,
which seems to be what won him his fame. He is now the editor of a
journal, "Journal of knot theory and its ramifications."

He sent me a packet of articles and preprints which I will briefly
discuss. If you read *any* of the stuff below, *please* read the
delightful reformulation of the 4-color theorem in terms of cross
products that he discovered! I am strongly tempted to assign it to my
linear algebra class for homework....

1) Map coloring and the vector cross product, by Louis Kauffman, J. Comb.
Theory B, 48 (1990) 45.

Map coloring, q-deformed spin networks, and Turaev-Viro
invariants for 3-manifolds, by Louis Kauffman, Int. Jour. of Mod. Phys.
B, 6 (1992) 1765 - 1794.

An algebraic approach to the planar colouring problem, by Louis Kauffman
and H. Saleur, in Comm. Math. Phys. 152 (1993), 565-590.

As we all know, the usual cross product of vectors in R^3 is not
associative, so the following theorem is slightly interesting:

Theorem: Consider any two bracketings of a product of any finite number
of vectors, e.g.:

L = a x (b x ((c x d) x e)) and R = ((a x b) x c) x (d x e)

Let i, j, and k be the usual canonical basis for R^3:

i = (1,0,0) j = (0,1,0) k = (0,0,1).

Then we may assign a,b,c,... values taken from {i,j,k} in such a way
that L = R and both are nonzero.

But what's really interesting is:

Meta-Theorem: The above proposition is equivalent to the 4-color
theorem. Recall that this theorem says that any map on the plane may be
colored with 4 colors in such a way that no two regions with the same
color share a border (an edge).

What I mean here is that the only way known to prove this Theorem is to
deduce it from the 4-color theorem, and conversely, any proof of this
Theorem would easily give a proof of the 4-color theorem! As you all
probably know, the 4-color theorem was a difficult conjecture that
resisted proof for about a century before succumbing to a computer-based
proof requiring the consideration of many, many special cases:

Every planar map is four colorable, by K. I. Appel and W. Haken, Bull.
Amer. Math. Soc. 82 (1976) 711.

So the Theorem above may be regarded as a *profoundly* subtle result
about the "associativity" of the cross product!
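Since the 4-color theorem is in fact true, the Theorem above guarantees that for this particular L and R some assignment from {i,j,k} makes L = R and nonzero, and a brute-force search over all 3^5 assignments will turn them up. A sketch (cross products on tuples; the encoding is mine):

```python
from itertools import product

def cross(u, v):
    """Ordinary vector cross product in R^3."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

i, j, k = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# Search for assignments making the two bracketings agree and be nonzero:
#   L = a x (b x ((c x d) x e))   and   R = ((a x b) x c) x (d x e)
solutions = []
for a, b, c, d, e in product((i, j, k), repeat=5):
    L = cross(a, cross(b, cross(cross(c, d), e)))
    R = cross(cross(cross(a, b), c), cross(d, e))
    if L == R and L != (0, 0, 0):
        solutions.append((a, b, c, d, e))

print(len(solutions), "assignments with L = R != 0")
```

One solution is (a, b, c, d, e) = (j, i, i, j, i), for which both sides come out to i.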

Of course, I hope you all rush out now and find out how this Theorem is
equivalent to the 4-color theorem. For starters, let me note that it
uses a result of Tait: first, to prove the 4-color theorem it's enough
to prove it for maps where only 3 countries meet at each vertex (since
one can stick in a little new country at each vertex), and second,
4-coloring such a map is equivalent to coloring the *edges* with 3
colors in such a way that each vertex has edges of all 3 colors
adjoining it. The 3 colors correspond to i, j, and k!

Kauffman and Saleur (the latter a physicist) come up with another algebraic
formulation of the 4-color theorem in terms of the Temperley-Lieb
algebra. The Temperley-Lieb algebra TL_n is a cute algebra with
generators e_1, ..., e_{n-1} and relations that depend on a constant d
called the "loop value":

e_i^2 = de_i
e_i e_{i+1} e_i = e_i
e_i e_{i-1} e_i = e_i
e_i e_j = e_j e_i for |i-j| > 1.

The point of it becomes clear if we draw the e_i as tangles on n
strands. Let's take n = 3 to keep life simple. Then e_1 is

\ / |
\/ |
|
/\ |
/ \ |


while e_2 is

| \ /
| \/
|
| /\
| / \

In general, e_i "folds over" the ith and (i+1)st strands. Note that if
we square e_i we get a loop - e.g., e_1 squared is

\ / |
\/ |
|
/\ |
/ \ |
\ / |
\/ |
|
/\ |
/ \ |

Here we are using the usual product of tangles (see the article "tangles"
in the collection of my expository posts, which can be obtained in a
manner described at the end of this post). Now the rule in
Temperley-Lieb land is that we can get rid of a loop if we multiply by
the loop value d; that is, the loop "equals" d. So e_1 squared is just
d times


\ / |
\/ |
|
|
|
|
|
|
/\ |
/ \ |

which - since we are doing topology - is the same as e_1. That's why
e_i^2 = de_i.

The other relations are even more obvious. For example, e_1 e_2 e_1 is
just

\ / |
\/ |
|
/\ |
/ \ |
| \ /
| \/
|
| /\
| / \
\ / |
\/ |
|
/\ |
/ \ |

which, since we are doing topology, is just e_1! Similarly, e_2 e_1 e_2
= e_2, and e_i and e_j commute if they are far enough away to keep from
running into each other.
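These relations can also be checked concretely in a standard matrix representation of TL_3 (the "cup-cap" representation; the parameter t below is my choice): take u = (0, t, 1/t, 0), let E be the outer product u u^T, and set e_1 = E tensor I, e_2 = I tensor E, acting on the 3-fold tensor power of R^2. The loop value comes out as d = t^2 + t^(-2). A sketch:

```python
# Verify the Temperley-Lieb relations for TL_3 in the cup-cap
# representation: E = outer(u, u) with u = (0, t, 1/t, 0),
# e1 = E (x) I, e2 = I (x) E, loop value d = t^2 + t^(-2).

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def close(A, B, tol=1e-12):
    return all(abs(a - b) < tol
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

t = 1.3
u = [0.0, t, 1.0 / t, 0.0]
E = [[ui * uj for uj in u] for ui in u]     # 4x4 cup-cap matrix
I2 = [[1.0, 0.0], [0.0, 1.0]]
d = t**2 + t**-2                            # loop value

e1 = kron(E, I2)                            # folds strands 1 and 2
e2 = kron(I2, E)                            # folds strands 2 and 3

assert close(matmul(e1, e1), scale(d, e1))      # e_i^2 = d e_i
assert close(matmul(e2, e2), scale(d, e2))
assert close(matmul(matmul(e1, e2), e1), e1)    # e1 e2 e1 = e1
assert close(matmul(matmul(e2, e1), e2), e2)    # e2 e1 e2 = e2
print("TL_3 relations hold with loop value d =", d)
```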

As an exercise for combinatorists: figure out the dimension of TL_n.

Okay, very cute, one might say, but so what? Well, this algebra was
actually first discovered in statistical mechanics, when Temperley and
Lieb were solving a 2-dimensional problem:

Relations between the `percolation' and `coloring' problem and other
graph-theoretical problems associated with regular planar lattices:
some exact results on the `percolation' problem, by H. N. V. Temperley
and E. H. Lieb, Proc. Roy. Soc. Lond. A 322 (1971), 251 - 280.

It gained a lot more fame when it appeared as the explanation
for the Jones polynomial invariant of knots - although Jones had been
using it not for knot theory, but in the study of von Neumann algebras,
and the Jones polynomial was just an unexpected spinoff. Its importance
in knot theory comes from the fact that it is a quotient of the group
algebra of the braid group (as explained in "Knots and Physics").
It has also found a lot of other applications; for example, I've used it in
my paper on quantum gravity and the algebra of tangles. So it is nice to
see that there is also a formulation of the 4-color theorem in terms of
the Temperley-Lieb algebra (which I won't present here).

[stuff deleted]

.........................................................................

This Week's Finds in Mathematical Physics - Week 22
John Baez

Lately I've been having fun in this series discussing some things that I
don't really know much about, like lattice packings of spheres. Next
week I'll get back to subjects that I actually know something about, but
today I want to talk about the 4-color theorem, the golden mean, the
silver root, knots and quantum field theory. I know a bit about *some*
of these subjects, but I've only become interested in the 4-color
theorem recently, thanks to my friend Bruce Smith, who has a
hobby of trying to prove it, and Louis Kauffman's recent work connecting
it to knot theory. The sources for what follows are:

1) The Four-Color Problem: Assault and Conquest, by Thomas L. Saaty and
Paul C. Kainen, McGraw-Hill, 1977, ISBN 0-07-054382-8.

and

2) Map coloring and the vector cross product, by Louis Kauffman, J. Comb.
Theory B, 48 (1990) 45.

Map coloring, q-deformed spin networks, and Turaev-Viro
invariants for 3-manifolds, by Louis Kauffman, Int. Jour. of Mod. Phys.
B, 6 (1992) 1765 - 1794.

An algebraic approach to the planar colouring problem, by Louis Kauffman
and H. Saleur, Yale University preprint YCTP-P27-91, November 8, 1991.

(I discussed this work of Kauffman already in "week8," where I described
a way to reformulate the 4-color theorem as a property of the vector cross
product.)

Where to start? Well, probably back in October, 1852, when Francis
Guthrie, coloring a map of England, wondered whether it was always
possible to color maps with only 4 colors in such a way that no two
countries (or counties!) touching with a common stretch of boundary were
given the same color. Guthrie's brother passed the question on to De
Morgan, who passed it on to students and other mathematicians, and in
1878 Cayley publicized it in the Proceedings of the London Mathematical
Society.

In just one year, Kempe was able to prove it. Whoops! In 1890
Heawood found an error in Kempe's proof. And then the real fun
starts....

But I don't want to tell the whole story leading up to how Appel and
Haken proved it in 1976 (with the help of a computer calculation
involving 10^10 operations and taking 1200 hours). I don't even
understand the structure of the Appel-Haken proof - for that, one should
probably try:

3) Every Planar Map is Four Colorable, by Kenneth Appel and Wolfgang
Haken, Contemporary Mathematics (American Mathematical Society), v. 98,
1989.

Instead, I'd like to talk about some tantalizing hints of relationships
between the 4-color problem and physics!

First, note that to prove the 4-color theorem, it suffices to consider
the case where only three countries meet at any "corner," since if more
meet, say four:

|
|
|
--------------
|
|
|

we can stick in a little country at each corner:


|
|
/ \
----- ------
\ /
|
|

so that now only three meet at each corner. If we can color the
resulting map, it's easy to check that the same coloring with the little
countries deleted gives a coloring of the original map.

Let us talk in the language of graph theory, calling the map a "graph,"
the countries "faces," their borders "edges," and the corners
"vertices." What we've basically shown is it suffices to consider
trivalent planar graphs without loops - that is, graphs on the plane
that have three edges meeting at any vertex, and never have both ends of
the same edge incident to the same vertex.

Now, it's easy to see that 4-coloring the faces of such a graph is equivalent
to 3-coloring the *edges* in such a way that no two edges incident to
the same vertex have the same color. For suppose we have a 4-coloring
of faces with colors 1, i, j, and k. Wait - you say - those don't look
like colors, they look like the quaternions. True! Now color each
edge either i, j, or k according to the product of the colors of the two
faces it is incident to, where we define products by:

1i = i1 = i 1j = j1 = j 1k = k1 = k
ij = ji = k jk = kj = i ki = ik = j.

These are *almost* the rules for multiplying quaternions, but with some
minus signs missing. Since today (October 16th, 1993) is the 150th
birthday of the quaternions, I suppose I should remind the reader what
the right signs are:

ij = -ji = k, jk = -kj = i, ki = -ik = j, i^2 = j^2 = k^2 = -1.

Anyway, I leave it to the reader to check that this
trick really gives us a 3-coloring of the edges, and conversely that a
3-coloring of the edges gives a 4-coloring of the faces.
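With the signs dropped, the multiplication table above is just the Klein four-group Z/2 x Z/2: encode 1, i, j, k as the bit-pairs 00, 01, 10, 11 and multiply by XOR (my encoding). The key algebraic fact is that three mutually distinct face colors meeting at a corner always have pairwise products exactly {i, j, k}, which can be verified exhaustively:

```python
from itertools import permutations

# Encode the face colors 1, i, j, k as 0, 1, 2, 3; the sign-free
# multiplication table (1i = i, ij = k, etc.) is then bitwise XOR.
names = {0: "1", 1: "i", 2: "j", 3: "k"}

for a, b, c in permutations(range(4), 3):   # three distinct faces at a corner
    edges = {a ^ b, b ^ c, c ^ a}           # the three edge colors
    assert edges == {1, 2, 3}, (names[a], names[b], names[c])

print("every trivalent corner gets all three edge colors i, j, k")
```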

So, we see that the edge-coloring formulation of the 4-color problem
points to some relation with the quaternions, or, pretty much the same
thing, the group SU(2)! (For what SU(2) has to do with quaternions, see
"week5".) Those wrong signs look distressing, but in the following
paper Penrose showed they weren't really so bad:

4) Applications of negative dimensional tensors, by Roger Penrose, in
Combinatorial Mathematics and its Applications, ed. D. J. A. Welsh,
Academic Press, 1971.

Namely, he showed one could count the number of ways to 3-color the
edges of a planar graph as follows. Consider all ways of labelling the
edges with the quaternions i, j, and k. For each vertex, take the
product of the quaternions at the three incident edges in
counterclockwise order and then multiply by i, getting either i or -i.
Take the product of these plus-or-minus-i's over all vertices of the
graph. And THEN sum over all labellings!
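As a sanity check, here is this kind of counting carried out for the smallest interesting example, K4 drawn in the plane (one vertex inside a triangle). I've implemented the vertex weight as the epsilon tensor on the three incident edge colors in counterclockwise order - it vanishes when colors repeat and is +/-1 when they are all distinct - and dropped the factors of i, since with four vertices they multiply to i^4 = 1. The cyclic orders below come from my choice of embedding; the sum reproduces the brute-force count of 3-edge-colorings:

```python
from itertools import product

# Edges of K4: 0=(0,1), 1=(0,2), 2=(0,3), 3=(1,2), 4=(1,3), 5=(2,3).
# For a planar drawing (vertex 0 inside triangle 1,2,3), the edges
# incident to each vertex, listed in counterclockwise order:
vertex_edges = [
    (0, 1, 2),   # at vertex 0: edges to 1, 2, 3
    (3, 0, 4),   # at vertex 1: edges to 2, 0, 3
    (5, 1, 3),   # at vertex 2: edges to 3, 0, 1
    (4, 2, 5),   # at vertex 3: edges to 1, 0, 2
]

def eps(a, b, c):
    """Epsilon tensor on the colors {0,1,2} (standing for i, j, k)."""
    if (a, b, c) in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        return 1
    if len({a, b, c}) == 3:
        return -1
    return 0

penrose_sum = 0
proper = 0
for coloring in product(range(3), repeat=6):   # label each edge i, j or k
    weight = 1
    for x, y, z in vertex_edges:
        weight *= eps(coloring[x], coloring[y], coloring[z])
    penrose_sum += weight
    if weight != 0:
        proper += 1   # every vertex saw three distinct colors

print(penrose_sum, proper)   # both 6: K4 has six proper 3-edge-colorings
```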

This recipe may sound complicated, but only if you haven't ever studied
statistical mechanics of lattice systems. It's exactly the same as how
one computes the "partition function" of such a system - the partition
function being the philosopher's stone of statistical mechanics, since
one can squeeze out so much information from it. (If we could compute
the partition function of water we could derive its melting point.) To
compute a partition function one sums over states (labellings of edges) the
product of the exponentials of interaction energies (corresponding to
vertices). The statistical mechanics of 2-dimensional systems is
closely connected to all sorts of nice subjects like knot theory and
quantum groups, so we should suspect already that something
interesting is going on here. It's especially nice that Penrose's
formula makes sense for arbitrary trivalent graphs (although it does not
count their 3-colorings unless they're planar), and satisfies some juicy
"skein relations" reminiscent of those satisfied by the quantum group
knot invariants. Namely, we can recursively calculate Penrose's
number for any trivalent graph using the following three rules:

A. Wherever you see

\ /
\ /
\ /
|
|
|
/ \
/ \
/ \

you can replace it with

| | \ /
| | \ /
| | \ /
| | - \
| | / \
| | / \
| | / \

In other words, replace the problem of computing Penrose's number for
the original graph by the problem of computing the difference of the
Penrose numbers for the two graphs with the above changes made. For
knot theory fans I should emphasize that we are talking about abstract
graphs here, not graphs in 3d space, so there's no real difference
between an "overcrossing" and an "undercrossing" - i.e., we could have
said

\ /
\ /
\ /
/
/ \
/ \
/ \

instead of

\ /
\ /
\ /
\
/ \
/ \
/ \

above, and it wouldn't matter.

B. If you do this you will start getting weird loops that have NO
vertices on them. You are allowed to dispose of such a loop if you
correct for that by multiplying by 3. (This is not magic, this is just
because there were 3 ways to color that loop!)

C. Finally, when you are down to the empty graph, use the rule that the
empty graph equals 1.

Greg Kuperberg pointed out to me that this is a case of the quantum group knot
invariant called the Yamada polynomial. This is associated to the spin-1
representation of the quantum group SU(2), and it is a polynomial in a
variable q that represents e^h, where h is Planck's constant.
But the "Penrose number" is just the value at q = 1 of the Yamada
polynomial - the "classical case" when h = 0. This makes perfect
sense if one knows about quantum group knot invariants: the factor of 3
in rule B above comes from the fact that the spin-1 representation of
SU(2) is 3-dimensional; this representation is really just another way
of talking about the vector space spanned by the quaternions i, j, and
k. Also, quantum group knot invariants fail to distinguish between
overcrossings and undercrossings when h = 0.

Now let me turn to a different but related issue. Consider the problem
of trying to color the *vertices* of a graph with n colors in such a way
that no two vertices at opposite ends of any given edge have the same
color. Let P(n) denote the number of such n-colorings. This turns out
to be a polynomial in n - it's not hard to see using recursion relations
similar to the skein relations above. It also turns out that the
4-color theorem is equivalent to saying that the vertices of any planar
graph can be 4-colored. (To see this, just use the idea of the "dual
graph" of a graph - the vertices of the one being in 1-1 correspondence with
the faces of the other.) So another way to state the 4-color theorem is
that for no planar graph does the polynomial P(n) have a root at n = 4.
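For a small graph you can tabulate P(n) directly. A brute-force sketch for K4, whose chromatic polynomial is known to be n(n-1)(n-2)(n-3), so n = 4 is not a root:

```python
from itertools import product

# Count proper vertex n-colorings of K4 by brute force and compare
# with the known chromatic polynomial n(n-1)(n-2)(n-3).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def P(n):
    return sum(
        all(col[u] != col[v] for u, v in edges)
        for col in product(range(n), repeat=4)
    )

for n in range(1, 7):
    print(n, P(n), n * (n - 1) * (n - 2) * (n - 3))
```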

P(n) is called the "chromatic polynomial" and has been intensively
investigated. One very curious thing is this. Remember the golden mean

G = (sqrt(5) + 1)/2 = 1.61803398874989484820458683437...?

Well, G + 1 is never a root of the chromatic polynomial of a graph!
(Unless the polynomial vanishes identically, which happens just when the
graph has loops.) The proof is not all that hard, and it's in Saaty and
Kainen's book. However - and here's where things get *really*
interesting - in 1965, Hall, Siry and Vanderslice figured out the
chromatic polynomial of a truncated icosahedron. (This looks like a
soccer ball or buckyball.) They found that of the four real roots that
weren't integers, one agreed with G + 1 up to 8 decimal places! Of
course, here one might think the 5-fold symmetry of the situation was
secretly playing a role. But in 1966 Barri tabulated a bunch of
chromatic polynomials in her thesis, and in 1969 Berman and Tutte
noticed that most of them had a root that agreed with G + 1 up to at
least 5 decimal places.

This curious situation was at least partially explained by Tutte in
1970. He showed that for a triangular planar graph (that is, one all
of whose faces are triangles) with n vertices one has

|P(G + 1)| <= G^{5-n}

(that little thingie is a "less than or equal to" sign). This is
apparently not a *complete* explanation, though, because the truncated
icosahedron is not triangular.
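Tutte's bound is easy to check numerically on a small example (my aside, not Tutte's). The octahedron is a planar triangulation with n = 6 vertices, and its chromatic polynomial - which one can verify by deletion-contraction - is P(k) = k(k-1)(k-2)(k^3 - 9k^2 + 29k - 32):

```python
from math import sqrt

G = (sqrt(5) + 1) / 2                 # the golden mean

def P_octahedron(k):
    # chromatic polynomial of the octahedron (planar triangulation, n = 6)
    return k * (k - 1) * (k - 2) * (k**3 - 9 * k**2 + 29 * k - 32)

value = abs(P_octahedron(G + 1))
bound = G ** (5 - 6)                  # Tutte: |P(G+1)| <= G^(5-n)
print(value, bound)                   # ~0.472 <= ~0.618
```

So |P(G+1)| is about 0.472, comfortably under the bound G^{-1} ~ 0.618.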

This is not an isolated freak curiosity, either! In 1974 Beraha
suggested checking out the behavior of chromatic polynomials at what are
now called the "Beraha numbers"

B(n) = 4 cos^2(pi/n).

These are

B(1) = 4
B(2) = 0
B(3) = 1
B(4) = 2
B(5) = G + 1
B(6) = 3
B(7) = S

etc. Note by the way that B(n) approaches 4 as n approaches infinity.
(What's S, you ask? Well, folks call B(7) the "silver root," a term I
find most poetic and eagerly want to spread!

S = 3.246979603717467061050009768008479621265....

If anyone knows charming properties of the silver root, I'd be
interested.) Anyway, it turns out that the roots of chromatic
polynomials seem to cluster near Beraha numbers. For example, the four
nonintegral real roots of the chromatic polynomial of the truncated
icosahedron are awfully close to B(5), B(7), B(8) and B(9). Beraha made
the following conjecture: let P_i be a sequence of chromatic polynomials
of graphs whose number of vertices approaches infinity as i ->
infinity. Suppose r_i is a real root of P_i and suppose the r_i
approach some number x. Then x is a Beraha number.
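For what it's worth, the Beraha numbers are trivial to tabulate, and one charming property of the silver root (my own remark, easily checked from the minimal polynomial of 2cos(2pi/7)) is that it satisfies x^3 - 5x^2 + 6x - 1 = 0:

```python
from math import cos, pi, sqrt

def beraha(n):
    return 4 * cos(pi / n) ** 2

G = (sqrt(5) + 1) / 2
S = beraha(7)                         # the "silver root"

for n in range(1, 10):
    print(n, beraha(n))               # B(5) -> G + 1, B(7) -> the silver root

# B(5) equals G + 1, and S is a root of x^3 - 5x^2 + 6x - 1
assert abs(beraha(5) - (G + 1)) < 1e-12
assert abs(S**3 - 5 * S**2 + 6 * S - 1) < 1e-9
```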

In work in the late 60's and early 70's, Tutte proved some results
showing that there really was a deep connection between chromatic
polynomials and the Beraha numbers.

Well, to make a long story short (I'm getting tired), the Beraha numbers
*also* have a lot to do with the quantum group SU(2). This actually
goes back to some important work of Jones right before he discovered the
first of the quantum group knot polynomials, the Jones polynomial. He
found that -- pardon the jargon burst -- the Markov trace on the
Temperley-Lieb algebra is only nonnegative when the Markov parameter is
the reciprocal of a Beraha number or less than 1/4. When the
relationship of all this stuff to quantum groups became clear, people
realized that this was due to the special nature of quantum groups when
q is an nth root of unity (this winds up corresponding to the Beraha
number B(n).)

This all leads up to a paper that, unfortunately, I have not yet read,
in part because our library doesn't get this journal!

5) Zeroes of chromatic polynomials: a new approach to the Beraha
conjecture using quantum groups, by H. Saleur, Comm. Math. Phys. 132
(1990) 657.

This apparently gives a "physicist's proof" of the Beraha conjecture,
and makes use of conformal field theory, that is, quantum field theory
in 2 dimensions that is invariant under conformal transformations.

I should say more: about what quantum groups have to do with conformal
field theory and knot polynomials, about the Kauffman/Saleur translation
of the 4-color theorem into a statement about the Temperley-Lieb
algebra, etc. But I won't! It's time for dinner. Next week, if all
goes according to plan, I'll move on to another puzzle in 2-dimensional
topology - the Andrew-Curtis conjecture - and Frank Quinn's ideas on
tackling *that* using quantum field theory.


Patrick

Jan 6, 2002, 11:51:59 PM
"Richard Bullock" <richard...@ntlworld.com> wrote in message news:<tdDX7.9796$ll6.1...@news6-win.server.ntlworld.com>...

> "John Baez" <ba...@math.ucr.edu> wrote in message
> news:a0lsfp$i38$1...@glue.ucr.edu...
> > Sun (Sunday - Dies Solis)
> > Moon (Monday - Dies Lunae)
> > Mars (Tuesday - Dies Martis)
> > Mercury (Wednesday - Dies Mercurii)
> > Jupiter (Thursday - Dies Iovis)
> > Venus (Friday - Dies Veneris)
> > Saturn (Saturday - Dies Saturnis)
> >
>
> The current names of the days are mostly named after Norse gods (The Vikings
> invaded us about 1200 years ago.) I think the Roman counterparts are the
> same as above though.
>
> Sunday - Sun's day
> Monday - Moon's day
> Tuesday - Tiw's day (god of war)
> Wednesday - Woden's day (god of wisdom)
> Thursday - Thor's day (god of thunder)
> Friday - Freya's day (goddess of love and beauty)
> Saturday - Saturn's day
>


It's interesting to notice that whereas the English names for the
weekdays are derived from the Norse gods, the French names are much
closer to the Roman expressions (see JB's list above, the connection
is very clear), except for one...

Monday -> Lundi (note: the French word for the Moon is Lune)
Tuesday -> Mardi
Wednesday -> Mercredi
Thursday -> Jeudi
Friday -> Vendredi
Saturday -> Samedi
Sunday -> Dimanche

The last one is an exception: it is not related to the French word for
the Sun (soleil). The origin is pretty easy to guess, though - it is
based on "domini", so it's the "day of God".


JB wrote:
>A nice systematic alternation, though you might wonder why *February* gets
>picked on; this is because the earlier Roman calendar had a short
>February, and a month called Mercedonius stuck in the middle of
>February now and then.

If I recall correctly, February was the last month of the year for the
Romans, and it was considered unlucky, which is why they made it
shorter!


Patrick

Ralph E. Frost

Jan 4, 2002, 11:32:48 PM

Lars Henrik Mathiesen <tho...@diku.dk> wrote in message
news:a14krh$mjs$1...@munin.diku.dk...

> "Ralph E. Frost" <ref...@dcwi.com> writes:

> >Lars Henrik Mathiesen <tho...@diku.dk> wrote:

> >> UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
> >> at the poles, 14 month period), which affects the UT0 time difference
> >> between two observatories.

> However, the wobble isn't side-to-side like your steering wheel
> analogy. The actual motion is somewhat irregular, but the best
> approximation is a circle (radius 7m). This looks a lot like the
> precession of a spinning top --- but it's not the same mechanism.

What causes it?

Are you saying the crust is, like, "breathing" -- compressing and expanding?

> If you use a coordinate grid for the Earth's surface with 90S fixed in
> the average location, then at any given time, each circle of latitude
> will actually be 7 meters low at one point, relative to the true axis
> of rotation, and 7 meters high at the opposite point --- and the low
> point will move all the way around the Earth in 14 months.

The steering wheel notion was sort of a cross-sectional view looking in the
direction of travel along the Earth's orbit line. Thanks to your added
description, I guess I can see that the wobble circles about the
planet in the 1-1/6-year cycle. It seems like if one had "hairs" going outward
from lots of points on the coordinate system, then these long fibers would
likely wave rhythmically to show this secondary motion --- akin to what
seaweed looks like at certain points in the tide cycle.

> But that's not why UT1 needs compensation --- it's because the lines
> of longitude are skewed too. If the axis of rotation is seven meters
> off towards Canada, meridian passages at Greenwich and in New Zealand
> will be retarded by a few milliseconds (if my math is right), and
> those in Japan and South Africa will be advanced about the same.

So, like a squeezing a rubber ball? Are there any computer graphic
simulations of this action -- sped up, of course?

Thank you.


Adam Russell

Jan 7, 2002, 12:49:39 AM

I think it curious that there might be different calendars but they all have
7 days in the week. What is a week based on? The only corresponding
thing I am aware of is the Jewish Bible, but probably most people couldn't
care less about ordering the calendar after a holy book.

[Moderator's note: Followups have been set to groups other than
s.p.r., since the subject is drifting further from physics. - jb]


Merlin

Jan 7, 2002, 1:22:32 AM

"Patrick"

> Saturday -> Samedi
> Sunday -> Dimanche
> The last one is an exception, it is not related to the French word for
> the Sun (soleil). I guess it's pretty obvious to guess the origin.. It
> is based on "domini" so it's the "day of God".

Dimanche is Dominica dies, and Samedi is
Sabbati dies, (Sabbath day).

______
J. Merlin

John Baez

Jan 8, 2002, 5:39:56 PM
In article <u3dig6n...@news.supernews.com>,
Jim Heckman <jamesr...@abcyahoo.xyzcom> wrote:

>On 4-Jan-2002, tho...@diku.dk (Lars Henrik Mathiesen) wrote:

>> >Lars Henrik Mathiesen <tho...@diku.dk> wrote:

>> >> UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
>> >> at the poles, 14 month period), which affects the UT0 time difference
>> >> between two observatories.

>> The total angular momentum of the Earth isn't changing at this
>> timescale; but the Earth isn't a rigid body, and angular momentum can
>> be transferred between different parts. The wobble is a movement of
>> the rigid crust relative to the axis of the total angular momentum.

>Is this wobble really due to the Earth's non-rigidity? There was a thread
>here in s.p.r almost exactly n years ago (i.e., I know it was around the
>Holidays, but forget which ones) about a wobble that results from a rigid
>ellipsoid of revolution having its angular momentum slightly off-axis from
>its inertial poles. In this case the angular momentum, and more importantly
>the angular velocity, precesses uniformly around the poles in the body
>frame.

Hmm... the wobble we were discussing in that thread has period
427 days and amplitude 15 feet (about 5 meters). These are not
quite the same as the figures cited above, but they're suspiciously
close! In any event, the earth is afflicted by many wobbles, all
of which must be understood by the poor folks who cling to the
rotation of this rock as a method of reckoning time.

That earlier thread about the wobbling of the earth was packed
with fun facts. It has been immortalized here:

http://math.ucr.edu/home/baez/wobble.html

Tony Smith

Jan 8, 2002, 2:00:57 PM
to ba...@math.ucr.edu

John Baez, in a post
to s.p.r. Re: This Week's Finds in Mathematical Physics (Week 175)
said:
"... If you study the algebra generated by annihilation and creation
operators on a fermionic Fock space in real quantum mechanics,
you get a real Clifford algebra.
In complex quantum mechanics, you get a complex Clifford algebra.
I haven't dared think about quaternionic Clifford algebras yet -
I should probably leave that to Toby. ...".


Some of my confused thinking in that area is due to the fact
that there are
two ways of thinking of a Clifford algebra as being Real, Complex, ... etc.

1 - (ignoring signature) you can say that a Clifford algebra Cl(n,F)
is defined over an n-dimensional vector space with scalar field F.

For the real case, Cl(n,R) Clifford algebras have 8-fold periodicity.

For the complex case, Cl(n,C) Clifford algebras have 2-fold periodicity.

Since the Quaternions Q are not a field (not commutative),
there is no (conventionally defined) Cl(n,Q) quaternion Clifford algebra.

2 - (not ignoring signature, and dealing only with Cl(p,q,R) Clifford algebras
with real scalar field R, p negative signatures, q positive signatures)
the Cl(p,q,R) Clifford algebras are matrix algebras (or sums of two of
them) with either real R, complex C, or quaternion Q coefficients,
appearing as an 8-periodic square array:

q 0 1 2 3 4 5 6 7
p
0 R C Q 2Q Q(2) C(4) R(8) 2R(8)
1 2R R(2) C(2) Q(2) 2Q(2) Q(4) C(8) R(16)
2 R(2) 2R(2) R(4) C(4) Q(4) 2Q(4) Q(8) C(16)
3 C(2) R(4) 2R(4) R(8) C(8) Q(8) 2Q(8) Q(16)
4 Q(2) C(4) R(8) 2R(8) R(16) C(16) Q(16) 2Q(16)
5 2Q(2) Q(4) C(8) R(16) 2R(16) R(32) C(32) Q(32)
6 Q(4) 2Q(4) Q(8) C(16) R(32) 2R(32) R(64) C(64)
7 C(8) Q(8) 2Q(8) Q(16) C(32) R(64) 2R(64) R(128)
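[For what it's worth, the table can be reproduced mechanically from the standard mod-8 classification: the coefficient type of Cl(p,q,R) depends only on (q - p) mod 8, and the matrix size is then fixed by the total dimension 2^(p+q). A quick sketch, with the type assignments read off Tony's table, which is the source of truth here:]

```python
from math import isqrt

# Coefficient type of Cl(p,q,R) as a function of (q - p) mod 8, together
# with the real dimension of one coefficient: R->1, C->2, Q->4, 2R->2, 2Q->8.
TYPES = {0: ('R', 1), 1: ('C', 2), 2: ('Q', 4), 3: ('2Q', 8),
         4: ('Q', 4), 5: ('C', 2), 6: ('R', 1), 7: ('2R', 2)}

def clifford(p, q):
    label, unit = TYPES[(q - p) % 8]
    m = isqrt(2 ** (p + q) // unit)   # unit * m^2 = dim Cl(p,q) = 2^(p+q)
    return label if m == 1 else '%s(%d)' % (label, m)

for p in range(8):
    print(' '.join(clifford(p, q) for q in range(8)))
```

This reproduces the 8x8 array above, including e.g. Cl(1,3) = Q(2) for Minkowski space.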

My confusion is that I don't know whether the terms
"complex Clifford algebra", etc
in the context of
"complex quantum mechanics", etc
refer to
Cl(n,C) (a Clifford algebra with complex scalar field C), etc
or to
a real Clifford algebra Cl(p,q,R) that is isomorphic to
a matrix algebra (or pair of them) with complex, etc, coefficients.


If it means Cl(n,C), then the periodicity is 2 for complex.

If it means Cl(p,q,R), then you can have periodicity 8 with
matrix coefficients as either R, C, or Q.

For example, consider the following table made from diagonals
of the above 8x8 table repeated (the * refers to a Clifford algebra
that has a tensor factor of R(16), and ** refers to two such factors):

Periodicity diagonals of Cl(p,q,R) real Clifford algebras
with matrix entries of types R, C, and Q (and another Q - see note below):

k 0 1 2 3
p,q

0,k R C Q 2Q
1,k+1 R(2) C(2) Q(2) 2Q(2)
2,k+2 R(4) C(4) Q(4) 2Q(4)
3,k+3 R(8) C(8) Q(8) 2Q(8)
4,k+4 R(16) C(16) Q(16) 2Q(16)
5,k+5 R(32) C(32) Q(32) *2Q(2)
6,k+6 R(64) C(64) *Q(4) *2Q(4)
7,k+7 R(128) *C(8) *Q(8) *2Q(8)
8,k+8 **R **C **Q **2Q

(Note that Cl(1,3) of Minkowski space has quaternionic structure Q(2),
which is (IIRC, as John Baez has discussed elsewhere) nicely consistent
with quaternion spinors as physical fermions.)

(Note also that I have put in a column for k = 3 that seems
to look like a column for pairs of quaternions.
I put that column in because I like octonions,
and although Cl(0,3) = 2Q is not the octonions,
it is closely related to them and a pair-of-quaternion column
is as close as I can get to an octonion column.)

==================================================================

Also, about my construction of F4 from Cl(8),
where Cl(8) is graded as 1 8 28 56 70 56 28 8 1
with vector and bivector 8 28
and
has spinor of dim sqrt(256) = 16,
so that
F4 = 28 + 8 + 16

John Baez said,
"... I'm not sure how "naturally" those various pieces of Cl(8)
can be put together to form F4,
so I don't really know how nice this idea of yours is.

When a mathematician says some construction is "natural",
they are really saying it's invariant under the action of some group
(or more generally category - but here it's probably just a group).

... So, for me to know what you mean by "naturally",
I'll need to know
the answer to this question:
What's the biggest group of automorphisms of Cl(8)
which preserve the 52-dimensional subspace you're identifying with F4,

and which act as automorphisms of that F4? ...".


What John says pretty conclusively rules me out as being a mathematician,
because I do not restrict the meaning of "natural" to
"invariant under the action of some group (or ... category ...)".

What I mean by "natural" in the above construction is this:

The 28 bivectors of Cl(8) close under a bracket product
to form the Lie algebra Spin(8).

Now add the 8 vectors of Cl(8) to the 28 bivectors.
I think that you can define a product such that those
8+28 = 36 elements close to form the Lie algebra Spin(9).
Here is how:
The graded structure of Cl(9) is
1 9 36 84 126 126 84 36 9 1
and
it is derived from the Cl(8) graded structure like two
rows of the binomial triangle:
Cl(8) 1 8 28 56 70 56 28 8 1
Cl(9) 1 9 36 84 126 126 84 36 9 1

The 36 bivectors of Cl(9) are made up
of the 8 vectors of Cl(8) and the 28 bivectors of Cl(8),
and
the product I want is defined by the bivector bracket product of Cl(9),
which closes to produce the Lie algebra Spin(9).

Note that the symmetric space Spin(9) / Spin(8) = OP1 = S8 -
that is, the 8-sphere, or octonion projective 1-space.


Now, I add the 16 spinors (8 +half-spinors and 8 -half-spinors) to
the 28+8 = 36, to get 28+8+16 = 36+16 = 52 elements of Cl(8), and
I think that you can define a product such that those 52 elements close
to form the Lie algebra F4.

Such a product is related to the symmetric space F4 / Spin(9) = OP2,
which is 52-36 = 16-dim and is the octonion projective plane.
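[The dimension counting in this construction can at least be sanity-checked with binomial coefficients - just arithmetic, saying nothing about the products themselves:]

```python
from math import comb, isqrt

cl8 = [comb(8, k) for k in range(9)]    # grading of Cl(8): 1 8 28 56 70 56 28 8 1
cl9 = [comb(9, k) for k in range(10)]   # grading of Cl(9): 1 9 36 84 126 126 84 36 9 1

# Pascal's rule: the 36 bivectors of Cl(9) = 8 vectors + 28 bivectors of Cl(8)
assert cl9[2] == cl8[1] + cl8[2] == 36  # dim Spin(9)

spinors = isqrt(sum(cl8))               # dim Cl(8) = 256 = 16^2, so spinor dim 16
assert spinors == 16

assert cl8[2] + cl8[1] + spinors == 52  # 28 + 8 + 16 = dim F4
assert 52 - 36 == 16                    # dim OP2 = dim F4 - dim Spin(9)
```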

Since I am not using the Clifford product, but am using a
"new" product (like using brackets of bivectors to make Lie algebras),
I do not think that my construction is "natural" in the sense
of mathematicians,
but
I do think that it is natural in the sense of a layman trying
to make a physics model that is a realistic representation of
experimental observations.

=================================================================

John also said: "... the hyperfinite II_1 factor ... makes the
idea of "infinite-dimensional Clifford algebra" precise!
Luckily, a vast amount is known about it.

Vaughan Jones basically won the Fields medal for his work
on ways of embedding the hyperfinite II_1 factor inside itself.
These turn out to be related to quantum groups, knot invariants,
topological quantum field theories, and lots of other things. ...".

Looking at "... ways of
embedding the hyperfinite II_1 factor inside itself ..."
is a point of view that sounds very interestingly reflexive/recursive.

The relationship of those embeddings to Clifford algebra
periodicity factorizations is something that I would like to understand.

I will try to read up on that work of Vaughan Jones.

Thanks very much for mentioning it.

What references would you recommend ?

Tony 8 Jan 2002


John Baez

Jan 9, 2002, 6:34:28 PM
In article <l03102800b860c9212ea2@[209.246.183.61]>,
Tony Smith <tsm...@innerx.net> wrote:

>My confusion is that I don't know whether the terms
>"complex Clifford algebra", etc
>in the context of
>"complex quantum mechanics", etc
>refer to
>Cl(n,C) (a Clifford algebra with complex scalar field C), etc
>or to
>a real Clifford algebra Cl(p,q,R) that is isomorphic to
>a matrix algebra (or pair of them) with complex, etc, coefficients.

Indeed - and I guess this confusion requires a bit of thought
to fully unravel, since while standard quantum theory always
seems to use a complex Hilbert space of states, it studies
not only complex but also real (aka "Majorana") spinor fields -
and the latter, especially, are best understood with the help
of the real Clifford algebras Cl(p,q,R).

>John Baez wrote:

>>When a mathematician says some construction is "natural",
>>they are really saying it's invariant under the action of some group
>>(or more generally category - but here it's probably just a group).
>>... So, for me to know what you mean by "naturally",
>>I'll need to know
>>the answer to this question:
>>What's the biggest group of automorphisms of Cl(8)
>>which preserve the 52-dimensional subspace you're identifying with F4,
>>and which act as automorphisms of that F4? ...".

>What John says pretty conclusively rules me out as being a mathematician,
>because I do not restrict the meaning of "natural" to
>"invariant under the action of some group (or ... category ...)".

Yes, you're acting like a normal person, to whom something is
"natural" if it comes spontaneously, without great effort. So
if I ask you in what sense your construction is natural, you say:

"Look here: you just take this Clifford algebra, and boil it,
and slice it up like this, and fry the slices lightly in oil,
and put them in the oven for half an hour, and pull them out
when they're golden brown, and - lo and behold! - you've got
the exceptional Lie algebra F4. What could be more natural than that?"

To which I reply, inwardly at least: "Sigh... you're not telling
me what I'd really love to know, you're just telling me a recipe
and saying it seems natural to you. So I'll just have to master
the recipe myself to figure out the answer to my question,
namely: under what symmetry group is this construction invariant?"

The fact that the symmetry of a recipe can be as important
as the recipe itself is mentioned in Lewis Carroll's
"The Hunting of the Snark":

"You boil it in sawdust: you salt it in glue:
You condense it with locusts and tape:
Still keeping one principal object in view -
To preserve its symmetrical shape."

>Since I am not using the Clifford product, but am using a
>"new" product (like using brackets of bivectors to make Lie algebras),
>I do not think that my construction is "natural" in the sense
>of mathematicians,

Oh, it *must* be invariant under *some* symmetry group, or I
bet you would not find it so charming! Unnatural activities
require lots of arbitrary choices which destroy all symmetry.
When something seems natural and pretty, there is usually a big
group (or category!) of symmetries lurking around. This is why
the mathematical concept of naturality is a good formalization
of the naive concept.

>I will try to read up on that work of Vaughan Jones.
>
>Thanks very much for mentioning it.
>
>What references would you recommend ?

Hmm. I don't know the perfect sugarcoated reference that would
ease the somewhat bitter taste that operator algebras have when
one first encounters them. I'd try these:

V. Jones, Subfactors and knots, Providence, R.I. : Published
for the Conference Board of Mathematical Sciences by the American
Mathematical Society with support from the National Science Foundation,
1991.

V. Jones and V.S. Sunder, Introduction to subfactors,
Cambridge University Press, Cambridge, 1997.

The subject in question is fairly vast, and includes not only
subfactors and knot theory, but also quantum groups, loop
groups, affine Lie algebras, conformal field theory, completely
integrable statistical mechanics problems, 3d topological quantum
field theory, and a hefty hunk of category theory. In a sense
these are all different ways of talking about the same thing! -
and I've been spending a lot of the last decade trying to understand
this thing.

Jim Heckman

Jan 9, 2002, 9:57:36 PM

On 8-Jan-2002, ba...@galaxy.ucr.edu (John Baez) wrote:

> In article <u3dig6n...@news.supernews.com>,
> Jim Heckman <jamesr...@abcyahoo.xyzcom> wrote:
>
> >On 4-Jan-2002, tho...@diku.dk (Lars Henrik Mathiesen) wrote:
>
> >> >Lars Henrik Mathiesen <tho...@diku.dk> wrote:
>
> >> >> UT1 is corrected for a wobble in the Earth's axis (7 meter amplitude
> >> >> at the poles, 14 month period), which affects the UT0 time difference
> >> >> between two observatories.
>
> >> The total angular momentum of the Earth isn't changing at this
> >> timescale; but the Earth isn't a rigid body, and angular momentum can
> >> be transferred between different parts. The wobble is a movement of
> >> the rigid crust relative to the axis of the total angular momentum.
>
> >Is this wobble really due to the Earth's non-rigidity? There was a thread
> >here in s.p.r almost exactly n years ago (i.e., I know it was around the
> >Holidays, but forget which ones) about a wobble that results from a rigid
> >ellipsoid of revolution having its angular momentum slightly off-axis from
> >its inertial poles. In this case the angular momentum, and more importantly
> >the angular velocity, precesses uniformly around the poles in the body
> >frame.
>
> Hmm... the wobble we were discussing in that thread has period
> 427 days and amplitude 15 feet (about 5 meters). These are not
> quite the same as the figures cited above, but they're suspiciously
> close! In any event, the earth is afflicted by many wobbles, all

Exactly. It's that "suspiciously close"-ness that leads me to wonder if the
Earth's non-rigidity is really the major contributing factor to the wobble that Dr.
Mathiesen is talking about.

> of which must be understood by the poor folks who cling to the
> rotation of this rock as a method of reckoning time.
>
> That earlier thread about the wobbling of the earth was packed
> with fun facts. It has been immortalized here:
>
> http://math.ucr.edu/home/baez/wobble.html

Ah yes, that takes me back! But I notice you failed to include my later post
where I actually did the full calculations for the off-axis angular momentum,
from intuitively motivated first principles all the way down to its effect on the
stars' azimuths and elevations. :-( Anyone who wants to see that much detail
can check Google for:

From: Jim Heckman <jhec...@my-deja.com>
Subject: Re: The wobbling of the earth - a puzzle
Date: 2000/01/03
Message-ID: <84far0$ahe$1...@nnrp1.deja.com>

--

Jim Heckman

Lars Henrik Mathiesen

Jan 9, 2002, 10:39:28 PM
[The newsserver that I use was down for a few days --- so this answer
is a bit late].

ba...@galaxy.ucr.edu (John Baez) writes:

I just went and read the thread, and found this quote in wob13.html:

Getting back to the wobble, Grossman's _Sheer Joy of Celestial
Mechanics_, gives a figure of 303 days for the so-called Euler free
period of the Earth (i.e., what we've been talking about). Then he
says this:

It is found that a single periodic term does not suffice for
representing the empirical variations in latitude. Two periodic
terms suffice to a high degree of satisfaction. The first has
amplitude 0".09 and period about one year. The second-- the
*Chandler wobble* -- has amplitude 0".18 and period about 14
months. There are, in fact, torques acting on the Earth. Among
the causes of external forces and their torques are changes in
the atmosphere from seasonal and other activities; tidal forces
from water on the surface of the Earth; and forces within the
Earth from shifting of crustal plates and viscous sloshing of a
molten core.

The websites that I found originally, only state that UT1 is corrected
for the Chandler wobble. It's possible that they glossed over the
difference between the two components mentioned here, and that both
are actually corrected for --- I would go digging through the IERS
website if I wanted to know.

The figure of 427 days does match the Chandler wobble pretty well, but
Grossman seems to be saying that that motion is not purely the result
of the angular velocity/moment of inertia major axis mismatch.

On the other hand, he doesn't say that it's all because of non-rigid
movements --- I may have over-interpreted a website on that.

According to the quote, the mismatch should actually give a component
with a 303 day period, which is not found as such in the (heuristic)
"highly satisfying" approximation. I don't have the math to see if
it's possible for the axes of velocity and inertia to be misaligned by
.15 seconds of arc (15 feet), or even the full .18", with the 303 day
precession merging with other perturbations to cause the motion seen.
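[The unit conversion in the last paragraph is easy to confirm - the figures below use a mean Earth radius of 6371 km, my assumption, not a number from the thread:]

```python
from math import pi

R_EARTH = 6.371e6                  # mean Earth radius in meters (assumed)
ARCSEC = pi / (180 * 3600)         # one second of arc, in radians

offset = 0.15 * ARCSEC * R_EARTH   # polar displacement corresponding to 0.15"
print(offset, offset / 0.3048)     # ~4.6 m, i.e. ~15 feet
```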

Anyway, that's all I know about the subject.

Danny Ross Lunsford

Jan 10, 2002, 2:37:09 AM
Tony Smith wrote:

> 2 - (not ignoring signature, and dealing only Cl(p,q,R) Clifford algebras
> with real scalar field R, p negative signatures, q positive signatures)
> the Cl(p,q,R) Clifford algebras are matrix algebras (or sums of two of
> them) with either real R, complex C, or quaternion Q coefficients,
> appearing as an 8-periodic square array:
>
> q 0 1 2 3 4 5 6 7
> p
> 0 R C Q 2Q Q(2) C(4) R(8) 2R(8)
> 1 2R R(2) C(2) Q(2) 2Q(2) Q(4) C(8) R(16)
> 2 R(2) 2R(2) R(4) C(4) Q(4) 2Q(4) Q(8) C(16)
> 3 C(2) R(4) 2R(4) R(8) C(8) Q(8) 2Q(8) Q(16)
> 4 Q(2) C(4) R(8) 2R(8) R(16) C(16) Q(16) 2Q(16)
> 5 2Q(2) Q(4) C(8) R(16) 2R(16) R(32) C(32) Q(32)
> 6 Q(4) 2Q(4) Q(8) C(16) R(32) 2R(32) R(64) C(64)
> 7 C(8) Q(8) 2Q(8) Q(16) C(32) R(64) 2R(64) R(128)

This table is one of the most beautiful things in all math! I printed it out
and put it on my wall, with blue Rs, green Cs, and red Qs. Goes nicely with
my Mardi-Gras poster!

The wonderful thing about this table is that it is not symmetric! This is
understandable formally if one starts at the bottom and recursively
constructs all the algebras - it matters whether your generators
square to +1 or -1. But I
can't understand it intuitively - can you? Can you see it physically? I was
thinking of the light cone perhaps really being a "light hyperboloid" which
is asymptotic to the light cone and indistinguishable from it outside atomic
distances; then one has a choice of limiting procedures by which this
hyperboloid flattens down onto the cone. The spatial analogy is a
hyperboloid of one vs. two sheets, that is

x^2 + y^2 - z^2 = 1 (1 sheet) vs. -x^2 - y^2 + z^2 = 1 (2 sheets)

-drl


Lars Henrik Mathiesen

Jan 10, 2002, 4:50:39 PM
"Ralph E. Frost" <ref...@dcwi.com> writes:
>Lars Henrik Mathiesen <tho...@diku.dk> wrote in message
>news:a14krh$mjs$1...@munin.diku.dk...

>> However, the wobble isn't side-to-side like your steering wheel
>> analogy. The actual motion is somewhat irregular, but the best
>> approximation is a circle (radius 7m). This looks a lot like the
>> precession of a spinning top --- but it's not the same mechanism.

>What causes it?

>Are you saying the crust is, like, "breathing" -- compressing and expanding?

Hmm, I totally lost you there. (And in the next part that I snipped).

I think that my intuitions about physical objects are different from
yours, enough that I can't usefully explain this to you over a text
medium. And I don't know of a website with visuals, either.

Sorry I can't help.

Toby Bartels

Jan 12, 2002, 3:31:06 PM
Danny Ross Lunsford wrote about
the antisymmetry of Cl(p,q,\R) in (p,q):

>I can't understand it intuitively - can you? Can you see it physically? I was
>thinking of the light cone perhaps really being a "light hyperboloid" which
>is asymptotic to the light cone and indistinguishable from it outside atomic
>distances; then one has a choice of limiting procedures by which this
>hyperboloid flattens down onto the cone. The spatial analogy is a
>hyperboloid of one vs. two sheets, that is

>x^2 + y^2 - z^2 = 1 (1 sheet) vs. -x^2 - y^2 + z^2 = 1 (2 sheets)

So the 1 on the RHS of both equations
is on the order of the Planck area in your visualisation?

I wonder what would be the physical consequences of such light cones.
The 2 sheet one wouldn't work, since no causal effect could cross the gap.
The 1 sheet hyperboloid, OTOH, suggests that
things can be affected by spatially separated events,
so long as they are within a Planck length of the thing.


-- Toby
to...@math.ucr.edu

Tony Smith

Jan 11, 2002, 1:19:39 AM
to ba...@math.ucr.edu

Adrian Ocneanu said, in his article
Quantized Groups, String Algebras and Galois Theory for Algebras,
in Operator Algebras and Applications, Volume 2, pages 119-172,
edited by David E. Evans and Masamichi Takesaki (Cambridge 1988):

"... The algebra ... is ... R, or the hyperfinite II1 factor ...
also called ... the elementary von Neumann algebra.
The algebra R ... is the weak closure of the Clifford algebra of
the real separable Hilbert space, is a factor ... which has very
many symmetries ...

... A ... theorem of Connes implies that any closed subalgebra of R
which is a factor ... is isomorphic either to Mat_n(C) or to R itself.
Thus any finite index subfactor N of R is isomorphic to R,
and
all the information in the inclusion N in R comes from the
relative position of N in R and not from the structure of N. ...
... in our context this guarantees that the closure of all finite
dimensional constructions done below will [lead] us back to R. ...

... for subfactors of finite Jones index, finite depth
and scalar centralizer of ... R ... In index less than 4 ...
the conjugacy classes are rigid:
... axioms eliminate one connection for each Dn and
the pair of connections on E7.
Thus
there is one subfactor for each diagram An,
one for each diagram D2n,
and a pair of opposite conjugate but nonconjugate subfactors
for each diagram E6 and E8. ...
... there is
a crystal-like rigidity of the position of subfactors of R. ...".


To me this means that R, or hyperfinite II1,
contains subfactors for each of the A-D-E classification
except odd D and E7 (Ocneanu says that is because they
are not geometrically flat).

As Y. Ito and I. Nakamura say in their paper at
http://www.maths.warwick.ac.uk/~miles/Warwick_EConf/Draft2/Nakamura.ps

"... The ADE Dynkin diagrams provide a
classification of the following types of objects (among others):
(a) simple singularities (rational double points) of complex surfaces ... ,
(b) finite subgroups of SL(2,C),
(c) simple Lie groups and simple Lie algebras ... ,
(d) quivers of finite type ... ,
(e) modular invariant partition functions in two dimensions ... ,
(f) pairs of von Neumann algebras of type II1 ...".


Therefore, through A-D-E correspondences,
it looks to me as though R (hyperfinite II1) might be
thought of as a way to represent most of (except odd D and E7):
the simple singularities;
the finite (McKay) subgroups of SL(2,C);
the simple Lie groups and simple Lie algebras; and
modular invariant partition functions in two dimensions (which
I guess would be such physical things as the Ising model, etc).

Since Ocneanu says that R is "... the weak closure of the Clifford algebra
of the real separable Hilbert space ...",
it sounds to me as though R (hyperfinite II1) is a limit of a sum
of real Clifford algebras, and so therefore should have some
sort of aspect of 8-periodicity
(although I still am not sure how to deal with R being a sum
and Cl(n) for large n being a tensor product of n/8 copies of Cl(8)).


As to the periodicity,
I note that Vaughan Jones and Henri Moscovici say, in their
Review of Noncommutative Geometry
by Alain Connes (Notices of the AMS 44 (August 1997) 792-799):

"... If one had to find a single word to sum up ... the work of Connes
... it would be the word "automorphisms". ...
... Connes began a penetrating study of automorphisms of type II factors
... The first step was to classify periodic automorphisms of
the hyperfinite II1 factor R ...
... Connes ...[introduced]... a new approach to hyperfiniteness:
injectivity of the von Neumann algebra M, meaning that M has
a Banach space complement in B(H). ... The result [was]

(injective) <=> (hyperfinite) ...

... in 1981 ... Connes ...[discovered]... cyclic cohomology ...
... with a crucial periodicity operator ...
... the periodic continuous cyclic cohomology of the algebra C^infinity (M),
equipped with its Frechet space topology, ...
...[is identified]... with the de Rham homology of M ...".


I haven't yet found any more details that will help me to
compare the two kinds of periodicity.


It is interesting to me that John Baez's adviser Irving Segal
reviewed Noncommutative Geometry by Connes
(Bull. AMS 33 (1996) 459-465), and said:

"... In the mid and later '40s, when I [Segal] undertook to
merge the von Neumann [W*] and Gelfand [C*] outlooks and
set up the modern notions of state and observable,
this work was thought quite eccentric by many. ... But
today it seems hardly more controversial than euclidean geometry. ...

... Today the divergences in quantum field theory appear as
probable consequences of an oversimple space-time geometry.
In the Einstein universe R1 x S3 ... the divergences are absent
in the crucial case of quantum electrodynamics ...[ I. Segal and
Z. Zhou, Convergence of quantum electrodynamics in a curved
deformation of Minkowski space, Ann. Phys. 232 (1994) 61-87.
MR95c:81874 ]... and in all likelihood in electroweak theory as well. ...

... The book presents a sophisticated mathematical reformulation
of the standard model ... It is exemplary for the author to apply
his advanced mathematical expertise to theoretical physics;
higher mathematics has become all too introverted and needs
the challenge of external connections. However, the interpretation
given is basically descriptive rather than explanatory.
No resolution for the long-standing basic question of
why the electromagnetic interaction is invariant under space inversion
while the weak interactions are maximally noninvariant is proposed.
More broadly, the efforts toward issues in mathematical physics
seem to ignore some rather basic physical principles -
- causality, stability (effectively, positivity of the energy), etc -
- in favor of a quick technological fix. This appears to represent
a kind of naivete that contrasts oddly with very high level
of mathematical knowledgeability. ...".

I find Segal's comments fascinating.

Tony 11 Jan 2002


Tony Smith

Jan 12, 2002, 2:27:07 PM
to
In their book Coxeter Graphs and Towers of Algebras (Springer 1989),
Goodman, de la Harpe, and Jones say, at pages 222-224:

"... We realize the hyperfinite II1 factor R as the completion,
with respect to the unique tracial state tr,
of the infinite tensor product of Mat2(C),

R = ( x(infinite) Mat2(C) )-.

Any closed subgroup G of SU(2) acts on x(infinite) Mat2(C) by the
infinite tensor product of its action by conjugation on Mat2(C).
The action preserves the trace, so extends to an action on R.
...
We can now use the McKay correspondence between finite subgroups
of SU(2) and affine Coxeter graphs ... to calculate the Bratteli
diagrams or principal graphs when G is finite. ...".
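The McKay correspondence invoked in this quote can be computed directly for a small subgroup. The sketch below (my own illustration, not from the book) builds the McKay matrix of the quaternion group Q8 inside SU(2) from its character table: entry A[r,s] is the multiplicity of irrep s in the tensor product of the defining 2-dim rep with irrep r. The resulting graph is a star with the 2-dim rep joined to the four 1-dim reps, i.e. the affine D4 Coxeter graph.

```python
import numpy as np

# Character table of Q8 = {+-1, +-i, +-j, +-k}, a finite subgroup of SU(2).
# Conjugacy classes {1}, {-1}, {+-i}, {+-j}, {+-k} have sizes 1, 1, 2, 2, 2.
sizes = np.array([1, 1, 2, 2, 2])
chars = np.array([
    [1,  1,  1,  1,  1],   # trivial rep
    [1,  1,  1, -1, -1],   # 1-dim rep with kernel <i>
    [1,  1, -1,  1, -1],   # 1-dim rep with kernel <j>
    [1,  1, -1, -1,  1],   # 1-dim rep with kernel <k>
    [2, -2,  0,  0,  0],   # the defining 2-dim rep: Q8 sitting inside SU(2)
], dtype=complex)
order = sizes.sum()        # |Q8| = 8
Q = chars[4]               # character of the defining representation

# McKay matrix via the usual character inner product:
# A[r,s] = multiplicity of irrep s inside Q (tensor) irrep r.
A = np.zeros((5, 5), dtype=int)
for r in range(5):
    for s in range(5):
        A[r, s] = round((sizes * Q * chars[r] * chars[s].conj()).sum().real / order)

print(A)
# The graph of A is a star: the 2-dim rep is linked once to each of the
# four 1-dim reps -- the affine D4 diagram, as the McKay correspondence says.
```

The same character-table recipe works for any finite subgroup of SU(2), with the cyclic, binary dihedral, and binary polyhedral groups giving the affine A, D, and E graphs respectively.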

Although I don't see where Goodman, de la Harpe, and Jones explicitly
refer to Clifford algebras, it seems to me that
their tensor product of factors of the form Mat2(C)
must be based on the complex-Clifford-periodicity of order 2,
in which the tensor factorization is
Cl(2n,C) = Cl(2,C) x ...(n times tensor)... x Cl(2,C)
and
since Cl(2n,C) = Mat2^n(C) and Cl(2,C) = Mat2(C)
it can be written as
Cl(2n,C) = Mat2^n(C) = Mat2(C) x ...(n times tensor)... x Mat2(C)


That leads me to think that my physics model is basically similar
to hyperfinite II1, but with real instead of complex structure,
as follows:

My physics model uses the real-Clifford-periodicity of order 8,
in which the tensor factorization is
Cl(8n,R) = Cl(8,R) x ...(n times tensor)... x Cl(8,R)
and
since Cl(8n,R) = Mat16^n(R) and Cl(8,R) = Mat16(R)
it can be written as
Cl(8n,R) = Mat16^n(R) = Mat16(R) x ...(n times tensor)... x Mat16(R)
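Both periodicity statements can be made concrete. The sketch below (my own check, not from the correspondence) verifies the complex case: the standard Jordan-Wigner tensor construction realizes the 2n generators of Cl(2n,C) inside Mat(2^n, C), and the 2^(2n) products of generators span the whole matrix algebra, so Cl(2n,C) = Mat2(C) tensored n times. The real case Cl(8,R) = Mat16(R) works analogously with real gamma matrices.

```python
import numpy as np
from itertools import combinations

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_list(ms):
    out = np.eye(1)
    for m in ms:
        out = np.kron(out, m)
    return out

def clifford_generators(n):
    """Jordan-Wigner: 2n anticommuting generators of Cl(2n,C) in Mat(2^n,C)."""
    gens = []
    for k in range(n):
        for P in (X, Y):
            gens.append(kron_list([Z] * k + [P] + [I2] * (n - k - 1)))
    return gens

n = 3
gens = clifford_generators(n)
dim = 2 ** n

# Check the Clifford relations {e_i, e_j} = 2 delta_ij.
for i, a in enumerate(gens):
    for j, b in enumerate(gens):
        target = 2 * np.eye(dim) if i == j else np.zeros((dim, dim))
        assert np.allclose(a @ b + b @ a, target)

# The products e_S over subsets S of the generators span all of Mat(2^n,C),
# so the Clifford algebra IS the full matrix algebra, as the text states.
products = []
for r in range(2 * n + 1):
    for S in combinations(range(2 * n), r):
        m = np.eye(dim, dtype=complex)
        for idx in S:
            m = m @ gens[idx]
        products.append(m.flatten())
rank = np.linalg.matrix_rank(np.array(products))
print(rank, dim * dim)  # both are 64 for n = 3
```

The tensor factors [Z]*k + [P] + [I2]*(n-k-1) make the Mat2(C) x ... x Mat2(C) factorization visible: each generator lives in one 2x2 slot, dressed with Z's to enforce anticommutation across slots.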


I would realize the real-hyperfinite II1 factor Rreal as the completion
of the infinite tensor product of Mat16(R),

Rreal = ( x(infinite) Mat16(R) )-
and
note that, similar to the action of SU(2) on Mat2(C),
there is a natural action of Spin(16) on Mat16(R),
so that
any closed subgroup G of Spin(16) acts on x(infinite) Mat16(R) by the
infinite tensor product of its action by conjugation on Mat16(R).

Although I don't know offhand of a generalization of the McKay
correspondence from SU(2) subgroups to Spin(16) subgroups,
I do know that SU(2) = Spin(3) is a nice subgroup of Spin(16),
so it seems that the subgroups of Spin(3) would extend to
give subgroups of Spin(16) and I could then get the McKay results
of A-D-E classification for the subgroups of Spin(16).

-------------------------------------------------------

One technical point that puzzles me here is that
Goodman, de la Harpe, and Jones get a full A-D-E correspondence,
including all D and E7,
whereas
Ocneanu, in his article Quantized Groups, String Algebras and Galois Theory
for Algebras, in Operator Algebras and Applications, Volume 2,
edited by David E. Evans and Masamichi Takesaki (Cambridge 1988),
says
that R, or hyperfinite II1, contains subfactors
for each of the A-D-E classification except odd D and E7,
because odd D and E7 are not geometrically flat.

Is Ocneanu therefore contradicting Goodman, de la Harpe, and Jones
with respect to odd D and E7 ?????

-------------------------------------------------------

Some other points that seem interesting to me are:

The Spin(16) symmetry of the Mat16(R) components of
Rreal = ( x(infinite) Mat16(R) )-
has several interesting features:

1 - Spin(16) / Spin(15) = S15 which has Hopf fibration S7 -> S15 -> S8,
and the S7 has Hopf fibration S3 -> S7 -> S4,
and the S3 = SU(2) = Spin(3) has Hopf fibration S1 -> S3 -> S2.

2 - adjoint(Spin(16)) plus half-spinor(Spin(16)) = E8
120-dim plus 128-dim = 248-dim

3 - the 28-dim Spin(8) bivectors of Cl(8,R) form a subgroup of Spin(16)

4 - (question) Are the 8-dim vectors of Cl(8,R) the 8 elements of Cl(8,R)
that are NOT represented by 120-dim Spin(16) or its 128-dim half-spinors ?
In other words, can 256-dim Cl(8,R) be represented by
8-dim vectors plus 248-dim E8 ?????

5 - (question) Recall that the 128-dim half-spinors of Spin(16)
correspond to the 128-dim full-spinors of Spin(15)
and to the 64-dim +half-spinors of Spin(14) plus
the 64-dim -half-spinors of Spin(14).
Can the 8-dim +half-spinors of Cl(8,R) be regarded as
a row (or column, or minimal ideal) of
the 64-dim +half-spinors of Spin(14), and therefore
also as part of the 128-dim half-spinors of Spin(16) ?
Can the 8-dim -half-spinors of Cl(8,R) be regarded as
a row (or column, or minimal ideal) of
the 64-dim -half-spinors of Spin(14), and therefore
also as part of the 128-dim half-spinors of Spin(16) ?

6 - (question) could 3, 4, and 5 above give a "natural" (in your
mathematician sense) symmetry of my 8 vectors, 28 bivectors,
and 8+8 spinors of Cl(8,R) that I like to identify with F4,
all in terms of the symmetry group E8,
which includes Spin(16) and its half-spinors ?
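The chain of Hopf fibrations in point 1 above can be spot-checked numerically. This sketch (my own, using the standard formula for the lowest Hopf map written on unit vectors in C^2) verifies S1 -> S3 -> S2: the map lands on the unit 2-sphere, and multiplying by a phase sweeps out the S1 fiber without moving the image.

```python
import numpy as np

rng = np.random.default_rng(0)

def hopf(a, b):
    """Hopf map S3 -> S2 on pairs (a,b) in C^2 with |a|^2 + |b|^2 = 1."""
    w = 2 * a * np.conj(b)
    return np.array([w.real, w.imag, abs(a) ** 2 - abs(b) ** 2])

# Random point on S3 = unit sphere in C^2 = R^4.
v = rng.normal(size=4)
v /= np.linalg.norm(v)
a, b = complex(v[0], v[1]), complex(v[2], v[3])

p = hopf(a, b)
assert np.isclose(np.linalg.norm(p), 1.0)     # the image lies on S2

# The fiber over p is a circle: a common phase on (a,b) cancels in hopf,
# exhibiting the fibration S1 -> S3 -> S2 from point 1.
for theta in np.linspace(0, 2 * np.pi, 7):
    z = np.exp(1j * theta)
    assert np.allclose(hopf(z * a, z * b), p)
print("S1 -> S3 -> S2 verified at a random point of S3")
```

The next fibrations in the chain, S3 -> S7 -> S4 and S7 -> S15 -> S8, are the same construction with the complex numbers replaced by quaternions and octonions.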


Tony 12 Jan 2002


PS - If my questions/comments in this area are getting too weird and
complicated, please let me know and I will quit sending them to you.

However, whatever you want to do about corresponding in detail,
I do want to say thanks very much for bringing this stuff to my attention,
because I am having fun playing with it, and also because I have
never really understood what was going on in such books as
the Goodman, de la Harpe, and Jones book, which I bought years ago
because it was a MSRI book, and my attempts (until now) at reading
it resulted in nothing but uncomprehending MiSeRy.
Now at last I have some faint glimmers of partial comprehension,
and even think that it shows me the overall large-scale algebraic
structure of my physics model, in which I see each of
the Mat16(R) components of Rreal = ( x(infinite) Mat16(R) )-
as describing local physics in one small-point-neighborhood-node of
what might be called a pre(with respect to spacetime)geometric foam.


Danny Ross Lunsford

Jan 14, 2002, 7:28:48 PM
to
Toby Bartels wrote:

> I wonder what would be the physical consequences of such light cones.
> The 2 sheet one wouldn't work, since no causal effect could cross the gap.
> The 1 sheet hyperboloid, OTOH, suggests that
> things can be affected by spatially separated events,
> so long as they are within a Planck length of the thing.

Exactly - and then the need for virtual particles goes away. Things are
always on shell, only there are a lot of shells, separated from each other
by the Planck length. Alternatively, one deals with a "light cone" in
anti-deSitter space

x^2 + y^2 + z^2 - t^2 - u^2 = 0

with fields that are always on-shell there, but which project down onto 4
dimensions apparently off-shell.
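Lunsford's projection can be illustrated in a few lines (my own sketch, not his construction): a point on the 5d cone x^2 + y^2 + z^2 - t^2 - u^2 = 0 has 4d interval x^2 + y^2 + z^2 - t^2 = u^2, so after dropping u it sits not on the single 4d light cone but on one of a family of shells labeled by u, i.e. it looks off-shell in 4 dimensions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample points exactly on the 5d "light cone" x^2+y^2+z^2 - t^2 - u^2 = 0:
# pick a spatial vector, pick u with |u| <= r, then solve for t.
for _ in range(5):
    x, y, z = rng.normal(size=3)
    r2 = x**2 + y**2 + z**2
    u = rng.uniform(-np.sqrt(r2), np.sqrt(r2))
    t = np.sqrt(max(r2 - u**2, 0.0))
    assert np.isclose(r2 - t**2 - u**2, 0.0)   # on-shell in 5 dimensions

    s2_4d = r2 - t**2                          # 4d interval after dropping u
    assert np.isclose(s2_4d, u**2)             # off the 4d cone by exactly u^2
print("5d null points project to 4d intervals s^2 = u^2 >= 0")
```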

-drl

John Baez

Jan 18, 2002, 7:13:58 PM
to
In article <l03102800b8642213b0f2@[209.246.182.31]>,
Tony Smith <tsm...@innerx.net> wrote:

>Adrian Ocneanu said, in his article
>Quantized Groups, String Algebras and Galois Theory for Algebras,
>in Operator Algebras and Applications, Volume 2, pages 119-172,
>edited by David E. Evans and Masamichi Takesaki (Cambridge 1988):

I would like to read and understand this some day. Whenever
I'm at Penn State visiting Ashtekar, Ocneanu talks my ear off
about this stuff. By the way, I'm not really showing off:
Ocneanu will talk to *anyone* about this stuff - it doesn't
matter if they understand what he's saying! Unfortunately he
goes too fast for me, and I can never remember much of what he
says, and he doesn't write many papers. I hadn't known about this
one - thanks for bringing it to my attention. All I knew is that
he'd been endlessly working on a book, and the NSF has finally
decided to pay for a grad student to help him finish it up.

He has lots of great ideas! But I can't understand them until
I take them and fit them into my own framework for understanding
things, and I've never gotten around to that yet.

> Segal wrote about Connes' book in the AMS Bulletin:

>>"... The book presents a sophisticated mathematical reformulation
>>of the standard model ... It is exemplary for the author to apply
>>his advanced mathematical expertise to theoretical physics;
>>higher mathematics has become all too introverted and needs
>>the challenge of external connections. However, the interpretation
>>given is basically descriptive rather than explanatory.
>>No resolution for the long-standing basic question of
>>why the electromagnetic interaction is invariant under space inversion
>>while the weak interactions are maximally noninvariant is proposed.

>>More broadly, the efforts toward issues in mathematical physics
>>seem to ignore some rather basic physical principles -
>>- causality, stability (effectively, positivity of the energy), etc -
>>- in favor of a quick technological fix. This appears to represent
>>a kind of naivete that contrasts oddly with very high level
>>of mathematical knowledgeability. ...".

>I find Segal's comments fascinating.

Basically he was pointing out that Connes' work has
not made quantum field theory more rigorously well-defined than
it had been previously, nor has it answered the questions
Segal wanted answered.

Amusingly, this book review outraged some bigshots in the
math community so much that they did a special 2-issue feature
praising Connes to the heavens in the AMS Notices.

What's amusing is that if Segal hadn't thought the book
was great, he would never have reviewed it! Getting Segal
to actually praise someone was well-nigh impossible. Merely
considering something worthy of attention was his equivalent
of flattery. His attitude to Connes was probably like his
attitude to the University of Chicago (where he used to teach):

"It's not all that good - it's just the best there is."

John Baez

Jan 18, 2002, 9:09:34 PM
to
In article <l03102800b86625a6a746@[209.246.184.131]>,
Tony Smith <tsm...@innerx.net> wrote:

>In their book Coxeter Graphs and Towers of Algebras (Springer 1989),
>Goodman, de la Harpe, and Jones say [...]

It will take me quite a while to catch up with you here.
I just checked this book out of the library, and someday
I hope to understand it - but I'm pretty busy, so it might not happen soon.

>We can now use the McKay correspondence between finite subgroups
>of SU(2) and affine Coxeter graphs ... to calculate the Bratteli
>diagrams or principal graphs when G is finite. ...".
>
>Although I don't see where Goodman, de la Harpe, and Jones explicitly
>refer to Clifford algebras, it seems to me that
>their tensor product of factors of the form Mat2(C)
>must be based on the complex-Clifford-periodicity of order 2,

It's probably related but I don't know how. Jones pioneered
the construction of type II_1 subfactors using the quantum group SU_q(2),
and this involves taking the tensor product of infinitely many copies of
the spin-1/2 representation of SU_q(2) - i.e. infinitely many copies of C^2.
If you look at his stuff on braids, tangles and subfactors, you'll
see pictures where each strand corresponds to one copy of the spin-1/2
representation. In the above stuff, Jones & Co. seem to be looking
at how finite subgroups of SU(2) act on the whole picture.

So, something is going on which involves arbitrarily many copies
of a spin-1/2 particle, or more precisely, its q-deformed cousin.
This suggests that Bott periodicity for q-deformed Clifford algebras
could be secretly entering this game, but I'm not sure. I've
certainly never heard anyone come out and openly admit it. But
that doesn't necessarily mean much.

> Cl(2n,C) = Mat2^n(C) = Mat2(C) x ...(n times tensor)... x Mat2(C)

Right. I'm pretty sure the q-deformed complex Clifford algebra with
2 generators is (like the undeformed one) equal to Mat2(C), but with
different generators. So, this fits into my vague imaginings quite
nicely.

>Although I don't know offhand of a generalization of the McKay
>correspondence from SU(2) subgroups to Spin(16) subgroups,
>I do know that SU(2) = Spin(3) is a nice subgroup of Spin(16),
>so it seems that the subgroups of Spin(3) would extend to
>give subgroups of Spin(16) and I could then get the McKay results
>of A-D-E classification for the subgroups of Spin(16).

Hmm... there are a few attempts to soup up the McKay correspondence
to tackle groups fancier than SU(2), but I don't understand them.
This sounds like a fairly ambitious plan you've got.

I want to write more in This Week's Finds about subfactors and
2-categories, so I will keep thinking about this stuff... but
in a rather back-burner sort of way. I wish I had time to dive in!


Tony Smith

Jan 18, 2002, 4:07:05 PM
to
I am trying to understand the relation (if any) between
the von Neumann hyperfinite II1 Clifford tensor product

... x Cl x Cl x Cl x Cl x Cl x Cl x Cl x ...

where conventionally Cl is Mat2(C) (with a natural SU(2) action)
and in my model Cl would be Mat16(R).

In the conceptual questions that follow, I won't distinguish
between the two cases, and just denote the Clifford factor by Cl.

Here are some questions that are puzzling me:

Could you consider the Clifford tensor product as
... Cl--Cl--Cl--Cl--Cl--Cl--Cl ...
a linear chain of Cl's ?

Could you consider each Cl in the linear chain as a spin-foam-node
in a linear spin-foam thing ?

Could you take that linear chain and,
like a long line of yarn, "fold" or "weave" it into a higher-dimensional
"array" or "tapestry" of Cl's ?

Prior to the folding/weaving, each Cl spin-foam-node in the linear chain would
have 2 nearest neighbors in the chain
... Cl--Cl--Cl--Cl--Cl--Cl--Cl ...

After the folding/weaving, could each Cl spin-foam-node in the tapestry
have more nearest neighbors ?

For example, to visualize, consider each Cl spin-foam-node as having
4 "arms" or "hooks" corresponding to { -x, +x, -t, +t }
(i.e., in this oversimplification which is sort of like Cl(2,R)
the 4 arms/hooks of each Cl would correspond
to + and - in the Cl(2,R) 2-dimensional vector space)
Therefore, denote each Cl spin-foam-node by the 4-armed symbol +
to get the linear chain

... +--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+--+-- ...

which might be folded/woven roughly as follows:

... +--+--+--+--+--+--+--+--+
|
+--+--+--+--+--+--+--+--+--+-- ...

... +--+--+--+--+--+--+--+--+--+
|
... +--+--+--+--+--+--+--+--+ +
| |
+--+


+--+--+--+
| |
+ +--+ +
| | | |
+ + +--+
| |
+ +--+--+--+--+--+--+--+--+--+--+-- ...
|
+--+--+--+--+--+--+--+--+--+--+--+-- ...


If you allow a natural (in my sense) nearest-neighbor-connection
among the folded/woven spin-foam-nodes,
then you might get:

... +--+--+--+--+--+--+--+--+
|
+--+--+--+--+--+--+--+--+--+-- ...


... +--+--+--+--+--+--+--+--+--+
| | | | | | | | | |
... +--+--+--+--+--+--+--+--+--+
| |
+--+

+--+--+--+
| | | |
+--+--+--+
| | | |
+--+--+--+
| | | |
+--+--+--+--+--+--+--+--+--+--+--+-- ...
| | | | | | | | | | | |
+--+--+--+--+--+--+--+--+--+--+--+-- ...


which, if you continue that pattern of folding/weaving indefinitely
in a natural (in my sense) way, you might end up with
a 2-dim square lattice that could be taken to be an Ising model,
or, equivalently, a Feynman checkerboard for the 2-dim Dirac equation.
(That equivalence has been shown
by Hal Gersch (Int. J. Theor. Phys. 20 (1981) 491).)
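As a toy rendering of the checkerboard Tony mentions (my own sketch, not Gersch's construction), here is the corner-weighted path sum on a 1+1 lattice: the particle moves one site left or right per time step, and each reversal of direction contributes a factor i*eps*m, so the discrete propagator is a sum over zigzag light-speed paths.

```python
# Minimal Feynman checkerboard path sum (illustrative toy, not normalized).
def checkerboard_kernel(steps, eps, m):
    """Return dict (site, direction) -> amplitude after `steps` steps."""
    w = 1j * eps * m                    # weight carried by each corner
    amp = {(0, +1): 1.0 + 0j}           # start at the origin moving right
    for _ in range(steps):
        new = {}
        for (x, d), a in amp.items():
            # continue straight: no corner factor
            key = (x + d, d)
            new[key] = new.get(key, 0) + a
            # reverse direction: pick up one corner factor w
            key = (x - d, -d)
            new[key] = new.get(key, 0) + a * w
        amp = new
    return amp

amp = checkerboard_kernel(steps=4, eps=0.1, m=1.0)
for (x, d), a in sorted(amp.items()):
    print(x, d, a)
# The never-turning path reaches x = 4 with amplitude 1; every other
# endpoint collects powers of w, one per reversal, as in the Dirac
# propagator's checkerboard expansion.
```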

Of course, I have chosen a simple 2-dim model for visualization,
and shown only one way of folding/weaving.

Are the different ways of folding/weaving related to each other
in ways that can be described by braids, etc. ?
If so, how might that work ?

Are there "special" dimensions (perhaps 1,2,4,8 ??) in which
some sort of "equivalence classes" of folds/weaves
might have a "dominant-most-likely" equivalence class
that might correspond to a physically interesting model,
as, for example, the Ising/Feynman checkerboard in 2-dim ?
My guesses as to "special" structures that might emerge
(depending on how many "natural arms/hooks" each Cl node has,
and perhaps on some aspects of "folding/weaving")
include:
1-dim linear lattice of real integers;
2-dim complex Gaussian integer square lattice
(what about Eisenstein triangular lattice??)
4-dim quaternion "integral" lattice with each vertex
having 24 nearest neighbors (this combines
the symmetries of square and triangular)
8-dim octonion "integral" E8 lattice (there are 7 slightly different
versions of the E8 lattice) (this is the stuff I like
in my model with Cl = Mat16(R)).

Even if answers to these questions might not be completely clear now,
any comments would be appreciated.


Tony 18 Jan 2002


Tony Smith

Jan 21, 2002, 9:13:31 PM
to
In posts to s.p.r., and in his week 175, John Baez has said:

"... what's a von Neumann algebra?

... The simplest example consists of all n x n complex matrices ...
... every type I_n factor is isomorphic to the algebra of n x n matrices ...

... the hyperfinite II_1 factor ...[is]... the union of all ...
2^n x 2^n matrices ... normalizing the trace on each of these matrix algebras
so that all the maps ... from the 2^n x 2^n matrices into
the 2^(n+1) x 2^(n+1) matrices ... are trace-preserving ...
... the algebra of 2^n x 2^n matrices is a Clifford algebra,
so
the hyperfinite II_1 factor is
a kind of infinite-dimensional Clifford algebra.

But the Clifford algebra of 2^n x 2^n matrices is secretly
just another name for the algebra
generated by creation and annihilation operators
on the fermionic Fock space over C^n ...

... something is going on which involves arbitrarily many copies
of a spin-1/2 particle, or more precisely, its q-deformed cousin.
This suggests that
Bott periodicity for q-deformed Clifford algebras
could be secretly entering this game,

but I[JB]'m not sure.
I[JB]'ve certainly never heard anyone come out and openly admit it. ...".


A question I[TS] have is based on these facts:

A complex Clifford algebra Cl(2n) is Mat2^n(C),
the algebra of 2^n x 2^n complex matrices;
and
complex Clifford periodicity says that
Cl(2n) = Cl(2) x ...(tensor n times)... x Cl(2).


The question is:

If von Neumann algebras of type I_infinity are a limit of type I_n
as n goes to infinity
and
if von Neumann algebras of type I_2^n = Mat2^n(C) = Cl(2n)
and
if complex Clifford periodicity says effectively that
Cl(2n) = Mat2(C) x ...(tensor n times)... x Mat2(C)
and
if von Neumann algebras of type hyperfinite II_1 are a union
of all Cl(2n) = Mat2^n(C) algebras, each of which,
by complex Clifford periodicity, is effectively
Cl(2n) = Mat2(C) x ...(tensor n times)... x Mat2(C)

then

could it be said that, if limits and unions behave nicely,
(which may be non-trivial) by complex Clifford periodicity

type I_infinity is isomorphic to type hyperfinite II_1 ?

Some of my thoughts on that question are:

At first sight,
note that for I_infinity you, as John Baez says,
"... can normalize the trace to get all the values {0,1,2,...,+infinity} ...",
while
for hyperfinite II_1, as John Baez says,
"... the trace of a projection can be any fraction in the interval [0,1]
whose denominator is a power of two. But actually,
any number from 0 to 1 is the trace of some projection in this algebra ...".

At first sight it looks to me as though
I_infinity traces {0,1,2,...,+infinity}
are
clearly a smaller set than
hyperfinite II_1 traces taking any number from 0 to 1,
but
if you use a term-by-term isomorphism of
I_2^n = Mat2^n(C) to the hyperfinite II_1 term Cl(2n) = Mat2^n(C)
and
then take the limit/union in a uniform way,
maybe
you could show an isomorphism of I_infinity to hyperfinite II_1,
or at least a homomorphism.


Also at first sight, I wonder whether the odd-n components of I_infinity
would mess up such an isomorphism or homomorphism,
but on second thought I would guess not, because by complex periodicity
Cl(2n+1) = Cl(1) x Cl(2n)
and the case Cl(2n+1) could always be embedded inside Cl(2n+2).

If such an isomorphism/homomorphism exists (or does not exist),
and if type I_infinity represents bosons
and if type hyperfinite II_1 represents fermions,
then
could existence (or nonexistence) of such an isomorphism/homomorphism
have some bearing on how
some form of boson-fermion supersymmetry might (or might not) work?


Any comments on this stuff would be appreciated.

Tony 21 January 2002

John Baez

Jan 22, 2002, 2:25:55 AM
to
In article <l03102800b87268f1c84e@[67.192.38.154]>,
Tony Smith <tsm...@innerx.net> wrote:

>The question is:

[...]

>could it be said that, if limits and unions behave nicely,
>(which may be non-trivial) by complex Clifford periodicity
>
>type I_infinity is isomorphic to type hyperfinite II_1 ?

No, all the various "types" of factors appearing in the
classification of von Neumann algebras are fundamentally
different - not isomorphic to each other as von Neumann
algebras.

I can see someone guessing that the I_infinity and hyperfinite
II_1 factors were isomorphic because both are naively just algebras
of "infinity-by-infinity square matrices". But this ignores the
nitpicky subtleties that are the meat and potatoes of operator
algebra theory! To get the I_infinity factor we take a Hilbert
space of countably infinite dimension and form the algebra of
*all* bounded linear operators on it. This is closed under taking
adjoints and closed in the weak topology (see week175 for what I
mean by that), so it's a von Neumann algebra.

To get the hyperfinite II_1 factor we take the algebra of 1x1 matrices,
map it into the algebra of 2x2 matrices via

x -> ( x 0 )
     ( 0 x )

then map that into the algebra of 4x4 matrices by the same formula,
and so on ad infinitum. Taking the union of all these algebras (or
more precisely the direct limit), we get a big algebra of infinity-
by-infinity square matrices. This is not yet a von Neumann algebra
because while closed under taking adjoints, it is not closed in the
weak topology. So, we take its closure in this topology and get a
von Neumann algebra - the hyperfinite II_1 factor. By the way we've
cooked this up, it consists of bounded linear operators on a Hilbert
space of countably infinite dimension... but: NOT ALL the bounded
operators!
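A toy numeric rendering of this construction (my own sketch): the embedding x -> diag(x, x) preserves the normalized trace tr with tr(1) = 1 at every stage of the tower, and a rank-1 projection at the 2^k-th stage has normalized trace 1/2^k, which is how the dyadic rationals in [0,1] show up as projection traces before the weak closure fills in the rest.

```python
import numpy as np

def embed(x):
    """Baez's embedding Mat(n) -> Mat(2n): x -> diag(x, x), a direct sum."""
    return np.kron(np.eye(2), x)

def tr(x):
    """Normalized trace with tr(identity) = 1, as in the II_1 construction."""
    return np.trace(x) / x.shape[0]

x = np.array([[3.0, 1.0], [1.0, 2.0]])
y = embed(embed(x))                   # the same x viewed inside the 8x8 stage
assert np.isclose(tr(x), tr(y))       # the embeddings are trace-preserving

# A rank-1 projection at stage 2^k has normalized trace 1/2^k, so the union
# of the tower realizes every dyadic rational in [0,1] as a projection trace.
for k in range(1, 5):
    n = 2 ** k
    p = np.zeros((n, n))
    p[0, 0] = 1.0
    assert np.isclose(tr(p), 1.0 / n)
print("tr is preserved up the tower; projection traces are dyadic rationals")
```

Contrast this with the I_infinity factor, whose usual (unnormalized) trace assigns a projection its rank, giving the range {0, 1, 2, ..., +infinity} that Baez describes below.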

So, the hyperfinite II_1 factor is a *subalgebra* of the I_infinity
factor, but there's no reason in god's green earth to think they're
*isomorphic*. They really have drastically different personalities:
the hyperfinite II_1 factor has a trace whose range on projections is
[0,1], while the I_infinity factor has a (more familiar) trace whose
range on projections is {0,1,2,...+infinity} - and NOT vice versa!

In physics, the hyperfinite II_1 factor is the von Neumann algebra
of observables in the abstract free fermionic quantum field, or in
other words, an infinite-dimensional version of a Clifford algebra.
The hyperfinite I_infinity factor is the von Neumann algebra of
observables in the abstract free bosonic quantum field, or in other
words, an infinite-dimensional version of the Weyl algebra.
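The fermion/boson contrast in the last paragraph can be seen in a few lines (my own illustration): the CAR relation {a, a+} = 1 has a 2x2 solution, while the CCR relation [a, a+] = 1 cannot hold in any finite dimension, because a commutator is traceless but the identity is not.

```python
import numpy as np

# Fermionic (CAR) operators live happily in finite dimensions:
a = np.array([[0.0, 1.0], [0.0, 0.0]])            # annihilation on C^2
assert np.allclose(a @ a, 0)                      # a^2 = 0: Pauli exclusion
assert np.allclose(a @ a.T + a.T @ a, np.eye(2))  # {a, a+} = 1

# Bosonic (CCR) operators cannot: tr[b, b+] = 0 always, but tr(I) = n != 0,
# so [b, b+] = I has no finite-dimensional solution. A truncated oscillator
# ladder operator shows exactly where the relation must break:
n = 6
b = np.diag(np.sqrt(np.arange(1.0, n)), k=1)      # truncated annihilation
comm = b @ b.conj().T - b.conj().T @ b
print(np.round(np.diag(comm), 3))
# The diagonal is 1 on the first five entries and -(n-1) = -5 on the last:
# correct everywhere except the top corner, which must go wrong to keep the
# trace zero. The bosonic algebra genuinely needs infinite dimensions,
# matching the I_infinity / II_1 split Baez describes.
```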

They're just different. Despite supersymmetry, this ain't bad.

Vive la difference!
