
What Ails the Fundamental Research in Physics


Sandhu G

Dec 4, 1999
Friends,
At the fag end of the 20th century let us review, reflect
and voice our opinion regarding the strengths and weaknesses of
front-line fundamental research in Physics. Let us pause and reflect on
our major achievements, major failures, and the missed opportunities if
any, during this 20th century. Are we on the right track? Or perhaps
in the grand maze of the unknown, have we somewhere missed the right
track and are now approaching a dead end?
The general opinion is that in all branches of applied
sciences, our progress and achievements have been tremendous. But in
theoretical Physics, which had been once regarded as Mother of all
sciences, the situation is quite grim. The opinion, of course, could be
divided. Today the best talent, the cream of our youth no longer find
it attractive to choose a career for doing fundamental research in
Physics. Why? Is it that at the end of 20th century, the fundamental
research in Physics is no longer that challenging a job as it was at the
beginning of the century?
Throughout the 20th century, Physicists have occupied
themselves with working out Quantum Mechanics and Relativity in all
their implications. In the process Physics has absorbed mathematical
ideas and notions of increasing sophistication and abstraction. The
Particle Physics too has been pushed into mathematical abstraction to
the limit. Today, ‘Unification’ is the theme, the backbone of modern
physics. But the meaning of Unification, in this context, is the search
for some grand mathematical structure which could link the mathematical
description of all separate phenomena. The mathematical structures
describing unification have a certain elegance and power. It is
remarkable that physical interactions so evidently different as the
weak, electromagnetic and strong forces can be made to appear as
different aspects of the same thing. Quoting David Lindley from his book
The End of Physics, "The achievement of grand unification is a
mathematical tour de force. But is it any more than that? Does it lead
to prediction and test in the traditional way? Are Physicists truly
laying bare the fundamental laws of nature in these overarching
mathematical schemes, or is the beauty of unification entirely in the
eyes of the beholders?"
The tragedy of the 20th century, I suppose, is the gradual shift
in our focus from the physical reality to the abstract mathematical
formulations which are supposed to describe physical reality. The quest
for some grand unified theory no longer implies a quest for a
fundamental theory which could fully explain the separate phenomena of
physical reality, one which could enable us to fully understand and
visualize all aspects of physical reality.
Modern Particle Physics is, in a literal sense
incomprehensible. The most recent speculation of the theoretical
physicists is that elementary particles are not particles at all but
vibrations of tiny loops of quantum mechanical strings, wriggling around
in twenty-six dimensional space. Again quoting David Lindley, "Even
within the community of particle physicists there are those who think
that the trend towards increasing abstraction is turning theoretical
physics into recreational mathematics, endlessly amusing to those who
can master the techniques and join the game, but ultimately meaningless
because the objects of the mathematical manipulations are forever beyond
the access of observation and mental visualization."
In my opinion, the community of Physicists in general appears to
have been steadily indoctrinated into believing that, due to the complexity
of physical reality, we cannot even demand deeper meaning, deeper
understanding and mental visualization of the basic phenomena of the quantum
mechanical world. For example, we are made to believe that the
interaction between two electrons takes place through the mutual
exchange of ‘virtual’ photons; but we are prohibited from demanding any
meaning, deeper understanding or mental visualization of such virtual
particles, or of such an exchange interaction. Even in the simplest
case, the 1s orbital of the Hydrogen atom, we are made to believe that
the angular momentum of the 1s electron about the proton is zero, and that
the electron in the 1s orbital gets ‘spread’ or ‘distributed’ in a
sort of spherical symmetry around the proton. Further, we are prohibited from
attempting to mentally visualize the orbital motion of the electron
around the proton. This sort of mass hypnotisation of Physicists in
general is so complete that they do not even listen to, or pay any heed to,
any alternative point of view. This, in my opinion, is the greatest
tragedy of 20th century Physics.
As a consequence of our shift in focus from physical
reality to the mathematical description of physical reality, we have
inadvertently adopted certain abstract mathematical concepts into our
physical world view. For example, the concepts of virtual particles,
of particles with negative energy content, and of physical singularities
like infinite mass concentrated in zero volume, associated with the
notions of the big bang and black holes, may actually have arisen as
abstract mathematical concepts. Instead of rejecting these concepts as
physically invalid, we have erroneously adopted them into our
physical world view. On top of that, experimental proofs and
validations of such physically unacceptable mathematical concepts are
often claimed. Invariably, the results and conclusions of all
experiments have to be drawn by interpreting the observations
with the help of some generally accepted theories or assumptions.
Often, when sophisticated experiments appear to confirm or validate some
physically invalid concept, the fault lies not in the observations or the
techniques but in the interpretation of those observations. Yet when
the results of such sophisticated experiments are announced, the underlying
theory or assumptions are never highlighted.
For example, the current belief that the universe is
continuously expanding, is said to be experimentally verified and well
established. Actually what is observed in the experiments is the red
shift in spectral lines. The crucial assumption made while interpreting
the red shift observation to mean expanding universe, is that the said
red shift is caused by the relative separation velocity of the source
and the observer. However, what is not highlighted is the fact that the
red shift in spectral lines could also be caused by loss of photon
energy during its passage through vast expanses of the universe. The
photon could lose a small fraction of its energy through scattering off
hydrogen atoms or cosmic dust particles or by virtue of its passage
through stellar gravitational and electromagnetic fields.
Agreed that fundamental research does require a lot of
mathematical support. Also agreed that the mathematics supporting
fundamental research in Physics may be highly complex and highly
abstract. But the end results of all the complex mathematical
processing, must be applicable to the Physics, must be applicable to the
physical world and hence must come within the grasp of human mind and
mental visualization. Just as a complex computer program may be written
in a low level coded language or even in machine language but the
efficacy of the program will be tested by its end results which must
come within the grasp of the user and must be expressed in some user
friendly language. If as an end result of a most sophisticated program
we see some strange characters on the screen, we say that the program
needs to be ‘debugged’. We never say that the program has produced some
abstract results which cannot be fully grasped by the human mind. In
that sense we might say that most of the abstract mathematical Physics
today needs to be ‘debugged’.

G S Sandhu

<http://www.geocities.com/ResearchTriangle/Forum/9850/index.html>


Terry Padden - Chelwood Consulting

Dec 6, 1999
Sandhu G <san...@ch1.dot.net.in> wrote in message
news:3849B2D4...@ch1.dot.net.in...

> The general opinion is that in all branches of applied
> sciences, our progress and achievements have been tremendous.

The measure of science is comprehension not technology. I would rate the
average (mean) educational standard of SPR participants as approximately
that of Year I Graduate School Physics. May I suggest that the recent
thread on SPR on "Instances of Zero" is a reasonably fair assessment of
our actual accomplishments. It seems we don't even know if Zero exists.

> Is it that at the end of 20th century, the fundamental
> research in Physics is no longer that challenging a job as it was at the
> beginning of the century?

No - it is still to some the most wonderful of challenges. Physics was never
really about jobs until the mid 20th century, when it became a relatively
well-paid, prestigious career, excessively promoted to impressionable High School
Graduates (HSG's). Today's HSG's are very aware of the other options and
make more rational choices. For some 30 years now the best HSG's have
mostly opted for Law, Medicine, Vet Science, & Commerce MBA tracks for good
reasons. It seems that Science is now generally for second rate minds. It
is no loss. Let us take a long term (over centuries) view. Science has
really only ever been a vocation for an exceptional few - not a profession
for the many. The post-WW2, military-driven inflationary expansion of
the Professional Scientific Universe was inevitably accompanied by a
Gresham's Law effect (Economics 201). Due to structural changes in funding and
technological obsolescence, these peculiarly 20th century arrangements for
big and ubiquitous science are now collapsing. Again it is no real loss.
Smaller is better.

> The tragedy of the 20th century, I suppose, is the gradual shift
> in our focus from the physical reality to the abstract mathematical
> formulations which are supposed to describe physical reality.

All mental construction (thinking) is abstract - i.e. we do not understand
how conscious observers do it. The only problem with any complex system,
abstract or otherwise, is that it takes time to study the workings properly
and it is easy to misunderstand such things. The tragedy is not the
application of abstract mathematics - it is the misunderstanding of abstract
mathematics (see my previous example on Zero), particularly by those at
Professorial level.

> the end results of all the complex mathematical
> processing, must be applicable to the Physics, must be applicable to the
> physical world and hence must come within the grasp of human mind and
> mental visualization.

Replace 'must' by 'may'; Insert 'some' before 'human'.

> Just as a complex computer program may be written
> in a low level coded language or even in machine language but the
> efficacy of the program will be tested by its end results which must
> come within the grasp of the user and must be expressed in some user
> friendly language. If as an end result of a most sophisticated program
> we see some strange characters on the screen, we say that the program
> needs to be 'debugged'.

Once the program is debugged, does that mean the user can understand the
program code (machine or source)? Science is about 'How' not 'What'.
Rutherford used much stronger language.

The problem with physics is that the programs do NOT produce bugs. They
work perfectly to 12 places of decimals in QFT or GR; i.e. provided we use
the appropriate High Level Language (Cobol, Fortran, C, VB) for particular
problems we always get programs which work perfectly for the users. Yet we
know there is only one machine code (universe) so really we only need one
High level language, i.e. a Unified Theory. The problem is we don't
understand the compiler so we can't explain HOW our programs get to produce
correct results. Physicists are just applications programmers without
system manuals who think they understand machine code. Rather than admit
they don't understand computers yet, they write their own manuals and
converse in hexadecimal to impress/frighten the users. I know this works.

Such behaviour is a very understandable human weakness, but unfortunately
Abstract Mathematics is evidently more powerful than Coke as an endorphin
once taken intravenously (Topology 401). Like all addictions there is no
real cure, so maybe all we can do is (a) isolate Pure Mathematicians in a
commune somewhere and communicate with them only on a "need to Know, but
don't touch or speak" basis - I think they would like that; and (b) for
physicists, introduce a DUI type offence which prohibits publication of
papers written in mathematical language without an expressed coherent
epistemology and ontology - I don't think they would like that.
Unfortunately prohibition never works.

Fortunately Physics does not have much real need for Math to advance. Math
is mainly a janitorial function. Most advances in Physics (Galileo -
Inertia, Relative Motion, Common gravitational force; Newton - Laws of
Motion, Universal Gravitation, Compound Light; Faraday - Fields; Thomson -
Charged Particles; Rutherford - Structured Atoms; Einstein - Relativity and
Equivalence; Bohr - Non Classical Orbits, Correspondence and
Complementarity; Weyl - Gauge invariance) were conceptual and/or
experimentally based. Maths can fairly claim Maxwell - ElectroMagnetism and
Unification with Light, and Dirac - anti-matter. Heisenberg -
non-commutativity and Planck - Quantum are shared because, although
mathematical, they were discovered in experimental mode.

The addicts don't seem to mind the janitorial work so why don't we let them
keep on with it - it keeps scientific unemployment down. I know they make a
hell of a noise when they are working, so you should get some ear plugs or
make your own noises back at them.

Loved your note.

Terry Padden


Phillip Helbig

Dec 6, 1999
In article <3849B2D4...@ch1.dot.net.in>, Sandhu G
<san...@ch1.dot.net.in> writes:

> For example, the current belief that the universe is
> continuously expanding, is said to be experimentally verified and well
> established. Actually what is observed in the experiments is the red
> shift in spectral lines. The crucial assumption made while interpreting
> the red shift observation to mean expanding universe, is that the said
> red shift is caused by the relative separation velocity of the source
> and the observer. However, what is not highlighted is the fact that the
> red shift in spectral lines could also be caused by loss of photon
> energy during its passage through vast expanses of the universe. The
> photon could lose a small fraction of its energy through scattering off
> hydrogen atoms or cosmic dust particles or by virtue of its passage
> through stellar gravitational and electromagnetic fields.

Enough has been written as to why the `tired light' hypothesis is
invalid (for example, you need a mechanism for frequency-independent
scattering).

It is also not the case that the only evidence for expansion comes
from redshifted spectral lines; there are many other lines of evidence.
In any case, the expansion is not an assumption but rather a conclusion.

Of course, if the redshift is not caused by expansion, you have created
more problems than you solve, and I'm not sure what you're trying to
solve in the first place.


--
Phillip Helbig Email ......... p.he...@jb.man.ac.uk
University of Manchester Tel. ... +44 1477 571 321 (ext. 2635)
Jodrell Bank Observatory Fax ................ +44 1477 571 618
Macclesfield Telex ................ 36149 JODREL G
UK-Cheshire SK11 9DL Web ... http://www.jb.man.ac.uk/~pjh/

************************ currently working at *******************************

Kapteyn Instituut Email (above preferred) hel...@astro.rug.nl
Rijksuniversiteit Groningen Tel. ...................... +31 50 363 4067
Postbus 800 Fax ....................... +31 50 363 6100
NL-9700 AV Groningen
The Netherlands Web ... http://gladia.astro.rug.nl/~helbig/

My opinions are not necessarily those of either of the above institutes.


John Baez

Dec 7, 1999
In article <3849B2D4...@ch1.dot.net.in>,
Sandhu G <san...@ch1.dot.net.in> wrote:

> The general opinion is that in all branches of applied
>sciences, our progress and achievements have been tremendous. But in
>theoretical Physics, which had been once regarded as Mother of all
>sciences, the situation is quite grim.

I think it's only grim in so-called "fundamental" physics - that
is, the search for fundamental laws of nature. And the current
problems here are really just the fruit of the spectacular progress
this field has made earlier in the century. By the 1980s, we had
theories that could explain all the experimental data we could
get our hands on: namely, general relativity and the Standard Model.
Theory rapidly outpaced technology, to the point where getting new
data to spot the flaws in our theories seemed to require technology
beyond our economic means!

Of course, some cracks have opened up recently, like the possible
discovery of neutrino mass and nonzero cosmological constant. These
are discoveries that do *not* require super-high-energy particle
accelerators. But the predominant trend in fundamental physics
has been a turn from experiment to theory ever since the Superconducting
Supercollider was not funded by the U.S. Congress. I expect this
to continue for a while, barring surprises from CERN or something like
that.

Without much new data, theoreticians are left to their own devices
when it comes to the difficult task of unifying, correcting and simplifying
general relativity and the Standard Model. It's no surprise that
they are fumbling around. It's no disaster, either! This is what
science is like: long periods of fumbling around, punctuated by
short periods of rapid progress. Taking the long view, physics has
just returned to its normal rate of progress.

This suggests that unless some surprising new development in fundamental
physics comes along, it's a good time to take a breather and let practical
technology - what we can actually *afford to build* - catch up with
what we can imagine building in principle.

>Today the best talent, the cream of our youth no longer find
>it attractive to choose a career for doing fundamental research in
>Physics. Why? Is it that at the end of 20th century, the fundamental
>research in Physics is no longer that challenging a job as it was at the
>beginning of the century?

I think the reason is related to what I said above. We already know
more fundamental physics than we know how to use. The way to make
easy progress now is to apply what we know to fields such as condensed
matter physics, chemistry, biology, and the associated technologies:
material science, electronics and photonics, nanotechnology, and
biotechnology. Using this we can lay the foundation for a new higher
level of fundamental experimental physics.

(If we want to, that is! I am not advocating a constant ramp-up in
technology and science, just pointing out a strategy for doing so.)

>Modern Particle Physics is, in a literal sense incomprehensible.

Not to me. Not to the people who do it. If one doesn't understand
particle physics, one can either study it or decide it's not worth the
bother. If one chooses the latter course, complaining that it's
"incomprehensible" seems like an odd use of time.


scott...@my-deja.com

Dec 10, 1999
In article <3849B2D4...@ch1.dot.net.in>,
Sandhu G <san...@ch1.dot.net.in> wrote:

[snip]

> .... For example, the current belief that the universe is
> continuously expanding, is said to be experimentally verified and well
> established. Actually what is observed in the experiments is the red
> shift in spectral lines. The crucial assumption made while interpreting
> the red shift observation to mean expanding universe, is that the said
> red shift is caused by the relative separation velocity of the source
> and the observer. However, what is not highlighted is the fact that the
> red shift in spectral lines could also be caused by loss of photon
> energy during its passage through vast expanses of the universe. The
> photon could lose a small fraction of its energy through scattering off
> hydrogen atoms or cosmic dust particles or by virtue of its passage
> through stellar gravitational and electromagnetic fields.

"Tired Light" theories have arisen numerous times throughout the years.
The overwhelming consensus of the scientific community is that they
are incorrect and the red shift is due to expansion. The scenarios for
photon energy loss that you describe have been disproven.

Scott


--
"gotta keep rockin while I still can
I gotta two pack habit and a motel tan
When my boots hit the boards I'm a brand new man
With my back to the riser I'll make my stand." Guitar Town, S. Earle




Jonathan Thornburg

Dec 10, 1999
In article <3849B2D4...@ch1.dot.net.in>, Sandhu G
<san...@ch1.dot.net.in> wrote:
> the
> red shift in spectral lines could also be caused by loss of photon
> energy during its passage through vast expanses of the universe. The
> photon could lose a small fraction of its energy through scattering off
> hydrogen atoms or cosmic dust particles or by virtue of its passage
> through stellar gravitational and electromagnetic fields.

In article <82gd5h$70v$3...@info.service.rug.nl>,
Phillip Helbig <p.he...@jb.man.ac.uk> replied:


>Enough has been written as to why the `tired light' hypothesis is
>invalid (for example, you need a mechanism for frequency-independent
>scattering).

Another problem with "tired light" is that unless you're also going
to throw out conservation of momentum, when you exchange energy between
the photon and the scattering particle, you're also going to exchange
some momentum. This is unlikely to be exactly along the photon's
incident propagation direction, so the effect will probably be to
deflect the photon's path a bit. (Think "Compton scattering", but
now for optical rather than X-ray photons.)

By the time you've reduced the photon's energy by a factor of 5
(we observe quasars at redshift 5, i.e. with what look like Balmer
sequence photons but at 1/5 of the Balmer sequence's laboratory energy),
you'd expect that the path might have been scattered rather a lot.
But yet high-redshift quasars still show sharp images, with no extra
angular fuzziness visible in (say) Hubble Space Telescope images with
resolution better than 0.1 arc second (5e-7 radians). To be conservative
and allow for the "fuzz" often seen around quasars,
[High-redshift quasars often show sharp *cores*, but
surrounded by fuzzy halos, which look rather like
active galaxies at that redshift. In the standard
interpretation, the halos *are* active galaxies at that
redshift and distance, the quasars being supermassive
black holes in the center of those same galaxies.]
I'll take 1 arcsecond (5e-6 radians) as an observational upper bound
for the amount of angular scattering seen in redshift-5 photons.

So if we believe tired light, then we have to hypothesize a scattering
mechanism that's not only frequency-independent, but that also can
manage to somehow remove 80% of a photon's energy, while leaving the direction
of its momentum vector intact to at least 99.9995% (or equivalently,
while giving it a transverse momentum < 5e-6 of its line-of-sight
momentum). That seems rather implausible.
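
As a rough numerical sketch of that point (a minimal illustration,
assuming for definiteness that the scatterers are free electrons at rest
and that the photon carries about 2.5 eV):

import math

# Illustrative sketch only: if Compton-style scattering off free electrons
# had to remove 80% of an optical photon's energy, how much would the
# propagation direction smear?  (Assumed inputs, not observed values.)
E_photon_eV = 2.5        # assumed optical photon energy
m_e_c2_eV = 511.0e3      # electron rest energy

# For E << m_e c^2 the mean fractional energy loss per scattering is
# ~ E/(m_e c^2), since Thomson scattering is forward-backward symmetric.
loss_per_scatter = E_photon_eV / m_e_c2_eV

# Scatterings needed to bring the energy down to 20% of its initial value:
n_scatter = math.log(1.0 / 0.2) / loss_per_scatter
print("scatterings needed: %.1e" % n_scatter)          # ~3e5

# Each such scattering deflects the photon by of order a radian, so after
# that many kicks the direction random-walks far past full randomization --
# compare the ~5e-6 radian observational bound quoted above.
print("random-walk spread: ~%.0f rad (direction fully randomized)"
      % math.sqrt(n_scatter))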


For this and other reasons, only a tiny minority (< 1%) of professional
astronomers and astrophysicists consider "tired light" plausible.

[No, that "< 1%" is not a SWAG (= scientific wild-assed
guess). It's a "Fermi estimate" based on there being somewhere
between 5 and 10 thousand members of the American Astronomical
Society (the major professional society of astronomers and
astrophysicists), and my seeing only a small number of AAS
member authors (on the order of 10 distinct authors back when
I was an astronomy grad student in the mid-80s) writing papers
arguing for tired light. Similar estimates would apply to the
IAU (= International Astronomical Union) or other similar bodies
of professional astronomers/astrophysicists.]

--
-- Jonathan Thornburg <jth...@galileo.thp.univie.ac.at>
http://www.thp.univie.ac.at/~jthorn/home.html
Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik
"If you are either rich or a camel, you should, as a purely practical
calculation, enjoy life now [rather than in the hereafter]."
-- John Kenneth Galbraith


holey2

Dec 10, 1999
Dear Dr. Helbig,

Are some of the lines scattering shows perpendicular?

I remain puzzled,

holey2




Maynard Handley

Dec 12, 1999
In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>,
ba...@galaxy.ucr.edu (John Baez) wrote:

>>Modern Particle Physics is, in a literal sense incomprehensible.
>

>Not to me. Not to the people who do it. If one doesn't understand
>particle physics, one can either study it or decide it's not worth the
>bother. If one chooses the latter course, complaining that it's
>"incomprehensible" seems like an odd use of time.

Let me structure this complaint a little differently.

The physics literature seems to have split into pop physics and monographs
with nothing in between. There is perceived to be a market for physics
books that are dumbed down to the point of idiocy (god forbid they include
a single equation), but there is not perceived to be a market for books
targeting, say, the intelligent physics junior, giving an overview of the
math and physics ideas involved and how they connect together (e.g. how
these virtual particles one is always hearing about map to mathematics).

There are a few good cheap math books that attempt to cover modern math
ideas with the goal of rapid overview, not indoctrination into
mathematics.
University of Chicago Press has done a fine job here.
However I don't see that sort of thing happening in physics. My experience
has been that one has to struggle one's way to each new insight, reading a
variety of books that cover the material from different perspectives, and
this is NOT just something that can be rationalized as character
building---modern physics is huge, and educators cannot hope to produce
students capable of moving the field forward if the pedagogics in place
require the student to study for 15 years to master the relevant material.

Unfortunately the current incentive system does not seem to favor writing
such a text. Writing a pop physics tract makes sense as a financial
investment, while writing a monograph allows one to make prof, to gather
more grad students, to build the word on the way to one's Nobel prize etc.
But taking two years off to write "Really Modern Physics for physics
juniors" seems a dead-end career move, while writing descriptions of how
to visualize something, with a brief side-bar on what the math for this
concept looks like when one reads more advanced texts, seems fated to tar
one as being unable to understand heavy-duty math.

Maynard


leste...@earthlink.net

Dec 12, 1999
On 4 Dec 1999 23:13:29 -0800, Sandhu G <san...@ch1.dot.net.in> wrote:

>Friends,
> At the fag end of the 20th century let us review, reflect
>and voice our opinion regarding the strengths and weaknesses of
>front-line fundamental research in Physics. Let us pause and reflect on
>our major achievements, major failures, and the missed opportunities if
>any, during this 20th century. Are we on the right track? Or perhaps
>in the grand maze of the unknown, have we somewhere missed the right
>track and are now approaching a dead end?

[...]

One issue that badly needs to be addressed in this context is whether
there are only three dimensions applicable to the comprehension of
physics in general, or more. Everything to date in regard to string
theory indicates the necessity of more than the conventional three
dimensions, but the underlying mechanical plausibility of such a
framework is never addressed, much less demonstrated.

> The tragedy of the 20th century, I suppose, is the gradual shift
>in our focus from the physical reality to the abstract mathematical
>formulations which are supposed to describe physical reality.

Amen.

Regards - Lester

ref http://home.earthlink.net/~lesterzick

[Moderator's note: Unnecessary quoted text deleted. -MM]


ca31...@bestweb.net

Dec 13, 1999
In article <handleym-101...@handma3.apple.com>,

hand...@ricochet.net (Maynard Handley) wrote:
> In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>,
> ba...@galaxy.ucr.edu (John Baez) wrote:
>
> >>Modern Particle Physics is, in a literal sense incomprehensible.
> >
> >Not to me. Not to the people who do it. If one doesn't understand
> >particle physics, one can either study it or decide it's not worth the
> >bother. If one chooses the latter course, complaining that it's
> >"incomprehensible" seems like an odd use of time.
>
> Let me structure this complaint a little differently.
>
> The physics literature seems to have split into pop physics and monographs
> with nothing in between.

Memetic algorithms seem to be "in between" the pop and the monographs:

http://search.yahoo.com/bin/search?p=memetics
http://ink.yahoo.com/bin/query?p=memetics&hc=1&hs=15

It seems to have come from such ideas as spin glasses, annealing,
and computation. There are also some links on "quantum memetics":

http://ink.yahoo.com/bin/query?p=meme+quantum&hc=0&hs=0

MeMes: melt in your head, not in your hands.


Paul D. Shocklee

Dec 14, 1999
Maynard Handley (hand...@ricochet.net) wrote:
: The physics literature seems to have split into pop physics and monographs
: with nothing in between. There is perceived to be a market for physics
: books that are dumbed down to the point of idiocy (god forbid they include
: a single equation), but there is not perceived to be a market for books
: targeting, say, the intelligent physics junior, giving an overview of the
: math and physics ideas involved and how they connect together (e.g. how
: these virtual particles one is always hearing about map to mathematics).

There are a few nice books on modern physics which are accessible to
someone with an undergraduate-level understanding. If you're specifically
interested in quantum field theory (i.e. virtual particles and such) with
a mathematical bent, I can recommend:

_A Unified Grand Tour of Theoretical Physics_, Ian Lawrie
_Introduction to Elementary Particles_, Griffiths
_Quantum Field Theory_, Ryder

--
-----------------------------------------------------------------------------
( Paul Shocklee - physics grad student - Princeton University )


Colin Walker

Dec 14, 1999
In article <82h0u4$pbp$1...@mach.thp.univie.ac.at>,
jth...@mach.thp.univie.ac.at (Jonathan Thornburg) wrote:

>Another problem with "tired light" is that unless you're also going
>to throw out conservation of momentum, when you exchange energy between
>the photon and the scattering particle, you're also going to exchange
>some momentum. This is unlikely to be exactly along the photon's
>incident propagation direction, so the effect will probably be to
>deflect the photon's path a bit. (Think "Compton scattering", but
>now for optical rather than X-ray photons.)

----

Here is a treatment of scattering for a tired light hypothesis.
It is frequency independent, and smearing due to scattering is
unobservable.

First, note that the amount of energy lost during each photonic
cycle is given by Hh, independent of wavelength. (Hubble's
constant times Planck's constant: Hh = 2e-51 joule given H = 3e-18 hertz.)

To the best of my knowledge, Henrik Broberg was the first to
publish this finding (Apeiron 1987), and proposed a collision
model for the interaction.
Another obvious possibility is that EM radiation is quantized
in increments of Hh and possesses a zero-point energy of Hh/2,
in the same manner as a quantum mechanical oscillator. This
point of view is more in line with an emission model of the
interaction proposed by Dart (Apeiron 1994).

SCATTERING:

In the following, it is presumed that some sort of interaction
takes place during each cycle of a photon's motion, leading to
a loss of energy and momentum from the photon. This means that
the photon is continually undergoing small deflections, rather
than infrequent larger deflections by interactions with
electrons, for instance.

Given light of wavelength L = 10^-7 meter (1000 Angstroms),
the photon's momentum is given by p = h / L.
Supposing the photons lose energy Hh per cycle, the change in
momentum is given by dp = Hh / c.
For simplicity, assume that the change in momentum is
perpendicular to the trajectory of the light, in which case
the path of the photon will be deflected by an angle,
theta = dp / p, or 10^-33 radian per cycle.

Now consider light arriving from an object with redshift,
z = 0.1, corresponding to a distance of about s = 10^25 meter,
and requiring about N = 10^32 photon cycles.
The worst-case estimate of the scattering angle occurs when
all deflections take place in one direction giving
[N theta] = 0.1 radian.
However, a more reasonable way to proceed is to assume that
deflections to the left, for example, are just as likely as
deflections to the right.
The natural analogy here is to decide which direction is taken
on the basis of flipping a coin in the manner of Bernoulli
trials.
Let K be the number of heads observed in N trials.
Then the variable K' = 2(K - N/2) / sqrt(N) is "standardized
normal", enabling statements to be made concerning the
probability of deviations of K from the mean, N/2.
For instance, if K' = 3.88, then 99.98% of sequences of
N coin tosses are expected to yield less than
[K' sqrt(N) / 2] = 2e16 surplus heads or tails. (N=10^32)
The total angular deflection corresponding to this surplus
is [2e16 theta] = 2e-17 radian. At a distance of 10^25 meter,
this arc subtends a distance of 2e8 meter, or about one eighth
the diameter of the sun.
Thus smearing due to scattering would be difficult to detect.

A more rigorous treatment might be necessary for z=5, since the
wavelength changes appreciably, but it is clear that smearing
due to scattering will be no more than 50 times that for z=0.1,
since the angular deflection per cycle decreases.
So even at z=5, smearing would be hard to detect.
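
A few lines of Python reproduce the figures above (a minimal check,
using exactly the inputs quoted in this post):

import math

# Check of the smearing estimate above, using the same inputs (illustrative only).
h = 6.626e-34     # Planck's constant, J s
H = 3e-18         # Hubble's constant as a frequency, 1/s
c = 3e8           # speed of light, m/s
L = 1e-7          # wavelength, m (1000 Angstroms)
s = 1e25          # distance for z = 0.1, m

p = h / L                        # photon momentum
dp = H * h / c                   # momentum lost per cycle (energy loss Hh)
theta = dp / p                   # deflection per cycle: ~1e-33 rad
N = s / L                        # number of photon cycles: ~1e32

worst = N * theta                # all kicks in one direction: ~0.1 rad
K = 3.88                         # ~99.98% two-sided normal quantile, as above
surplus = K * math.sqrt(N) / 2   # surplus of heads over tails: ~2e16
spread = surplus * theta         # net random-walk deflection: ~2e-17 rad
arc = spread * s                 # transverse smearing at the source: ~2e8 m

print("theta = %.0e rad/cycle, N = %.0e cycles" % (theta, N))
print("worst case %.0e rad; random walk %.0e rad -> %.0e m at the source"
      % (worst, spread, arc))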

APPENDIX - DERIVATION OF Hh:

It is shown that under the hypothesis of tired light, the
energy lost by each photon would be Hh per cycle, independent
of the wavelength. (It is assumed above that this has some
physical significance, i.e., something happens every cycle.)

Hubble's law is usually presented as v=Hs, where v is the
recession velocity, and s is the distance travelled by light.
The velocity is inferred from the shift, dL, in the wavelength
of light, L, using the Doppler approximation v = c dL / L.
Since the velocity is inappropriate in the tired light model,
Hubble's law can be rewritten as [c dL / L] = H s.

Let us first determine the shift in wavelength as the photon
travels a distance s = L + dL, i.e., the photon goes through
one cycle. (Note: if you neglect the change in wavelength, dL,
and substitute s = L instead, you will end up with the same
result (Hh), but it will be seen to be a very accurate
approximation instead of the exact result obtained here.)
So after undergoing one cycle, [c dL / L] = H (L + dL).
This gives dL = H L^2 / (c - H L).
It is then an easy matter to find the difference in energy
dE = hc/L - hc/(L+dL) = Hh.
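
(A minimal symbolic check of this algebra, using sympy:)

import sympy as sp

# Symbolic check of the result above: the energy lost over one cycle is exactly Hh.
H, h, c, L = sp.symbols('H h c L', positive=True)

dL = H * L**2 / (c - H * L)        # from  c dL / L = H (L + dL)
dE = h * c / L - h * c / (L + dL)  # energy difference over one cycle

print(sp.simplify(dE))             # prints H*h -- independent of the wavelength L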

CONCLUSION:

Although the above treatment required some speculation
about the mechanism of tired light, it shows that the
lack of observable smearing is not necessarily a problem
for that hypothesis, at least in principle.

The real problem with tired light is the finding of time
dilation at distant sources. Cosmologists argue that the
redshift, if it is not due to recession, might come from
energy loss or time dilation, but not both. And so
tired light is routinely omitted from analyses of data
such as the recent distant supernova observations.
What is lacking in the tired light model is a resolution
to this apparent contradiction.

Until the contradiction between energy loss and time
dilation can be resolved theoretically, (and published,
and publicized) the tired light model will continue to
languish, even though statistical analysis might show
that tired light fits intensity vs redshift better than
other models.

John Baez

Dec 14, 1999
In article <handleym-101...@handma3.apple.com>,
Maynard Handley <hand...@ricochet.net> wrote:

>In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>,
>ba...@galaxy.ucr.edu (John Baez) wrote:

>>Someone complained:

>>>Modern Particle Physics is, in a literal sense incomprehensible.

>>Not to me. Not to the people who do it. If one doesn't understand
>>particle physics, one can either study it or decide it's not worth the
>>bother. If one chooses the latter course, complaining that it's
>>"incomprehensible" seems like an odd use of time.

>Let me structure this complaint a little differently.
>

>The physics literature seems to have split into pop physics and monographs
>with nothing in between. There is perceived to be a market for physics
>books that are dumbed down to the point of idiocy (god forbid they include
>a single equation), but there is not perceived to be a market for books
>targeting, say, the intelligent physics junior, giving an overview of the
>math and physics ideas involved and how they connect together (e.g. how
>these virtual particles one is always hearing about map to mathematics).

Okay. I have much more sympathy with *this* complaint! There is
indeed a bit of a gap between the stuff that's too soupy to be
nourishing and the stuff that's too concentrated to be digestible.
The intermediate stuff exists, but there's not enough of it, and
it's sort of hard to find.

If your "intelligent physics junior" came to me and wanted to quickly
survey the big picture in physics, before diving into the meaty
technical details, I'd tell them to start by reading these books:

Emilio Segre, From Falling Bodies to Radio Waves: Classical Physicists
and Their Discoveries, W. H. Freeman, New York, 1984.

Emilio Segre, From X-Rays to Quarks: Modern Physicists and Their Discoveries,
W. H. Freeman, San Francisco, 1980.

Abraham Pais, Inward Bound: Of Matter and Forces in the Physical World,
Clarendon Press, New York, 1986.

Robert P. Crease and Charles C. Mann, The Second Creation:
Makers of the Revolution in Twentieth-Century Physics, MacMillan,
New York, 1986.

Knowing a bit of the history of physics is very helpful if you want
to understand what people are interested in now! These books cover
the history in a nice way - not too soupy, not too concentrated.
The first three are written by actual physicists. The fourth isn't,
and it shows, but it's still good.

Then I would tell them to read these books, which focus on the
fundamental concepts of physics, and get a bit more technical:

M. S. Longair, Theoretical Concepts in Physics, Cambridge U. Press 1986.

Richard Feynman, Robert B. Leighton and Matthew Sands, The Feynman
Lectures on Physics, Addison-Wesley, 1989.

Then, for a brief tour of quantum field theory, I'd tell them to read
this:

Richard Feynman, QED: the Strange Theory of Light and Matter,
Princeton University Press, Princeton, 1985.

And for a brief tour of general relativity I'd tell them to read
these:

Kip S. Thorne, Black Holes and Time Warps: Einstein's Outrageous
Legacy, W. W. Norton, New York, 1994.

Robert M. Wald, Space, Time, and Gravity: the Theory of the Big Bang
and Black Holes, University of Chicago Press, 1977.

All three are written by experts. They aren't technical, but
to understand them fully it's nice to have some background.

(Out of ignorance, I haven't included books on astrophysics,
condensed matter physics, and all sorts of other good stuff.
Maybe other readers can fill in some more gaps.)

Finally, I'd tell the young physicist to make sure to regularly
read sci.physics.research, and ask lots of good technical questions.


Maynard Handley

Dec 15, 1999
In article <handleym-101...@handma3.apple.com>,
Maynard Handley <hand...@ricochet.net> wrote:
> there is not perceived to be a market for books
> targeting, say, the intelligent physics junior, giving an overview of the
> math and physics ideas involved and how they connect together (e.g. how
> these virtual particles one is always hearing about map to mathematics).

Hmm. I won't comment on the academic book market in general, but here
are a few physics books which I think count as counterexamples...

First off, the all-time classics, still in print after 36+ years:

Richard P. Feynman and Robert B. Leighton and Matthew Sands
"The Feynman Lectures on Physics:
Volume I: Mainly Mechanics, Radiation, and Heat"
ISBN 0-201-02010-6 (hardcover), 0-201-02116-1 (paperback)

Richard P. Feynman and Robert B. Leighton and Matthew Sands
"The Feynman Lectures on Physics:
Volume II: Mainly Electromagnetism and Matter"
ISBN 0-201-02011-4 (hardcover), 0-201-02117-X (paperback)

Richard P. Feynman and Robert B. Leighton and Matthew Sands
"The Feynman Lectures on Physics:
Volume III: Quantum Mechanics"
ISBN 0-201-02014-9 (hardcover), 0-201-02118-8 (paperback)

Next, a beautiful book by a well-known UK astrophysicist, focusing on
just the sort of "overview" topics Maynard was asking for:

Malcolm S. Longair
"Theoretical Concepts in Physics"
ISBN 0-521-25550-3 (hardcover), 0-521-27553-9 (paperback)

Next, a delightful book of general-undergraduate physics problems
(with solutions!):

N. Thompson
"Thinking Like a Physicist"
Adam Hilger, Boston
ISBN 0-85274-513-3 (paperback)


My remaining candidates are all more focused on astrophysics.
Here's a great collection of review articles (with an interesting pair
of mildly-controversial opinion pieces thrown in for good measure)
from the mid-1980s:

Stuart L. Shapiro and Saul A. Teukolsky
"Highlights of Modern Astrophysics: Concepts and Controversies"
ISBN 0-471-82421-6

Finally, although it's 25ish years old now, and getting rather dated
in places, I can't resist mentioning

Eugene H. Avrett
"Frontiers of Astrophysics"
ISBN 0-674-32659-8 (hardcover), 0-674-32660-1 (paperback)

--
-- Jonathan Thornburg <jth...@galileo.thp.univie.ac.at>
http://www.thp.univie.ac.at/~jthorn/home.html
Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik

Amount of all stock owned by the least wealthy 90% of America: 18%
Amount of all stock owned by the most wealthy 1% of America: 41%
-- Economic Policy Institute


sand...@my-deja.com

Dec 16, 1999
In article <82gd5h$70v$3...@info.service.rug.nl>,

p.he...@jb.man.ac.uk wrote:
> In article <3849B2D4...@ch1.dot.net.in>, Sandhu G
> <san...@ch1.dot.net.in> writes:
>
> > For example, the current belief that the universe is continuously
> > expanding, is said to be experimentally verified and well
> > established. Actually what is observed in the experiments is the
> > red shift in spectral lines. The crucial assumption made while
> > interpreting the red shift observation to mean expanding universe,
> > is that the said red shift is caused by the relative separation
> > velocity of the source and the observer. However, what is not
> > highlighted is the fact that the red shift in spectral lines could
> > also be caused by loss of photon energy during its passage through
> > vast expanses of the universe. The photon could lose a small
> > fraction of its energy through scattering off hydrogen atoms or
> > cosmic dust particles or by virtue of its passage
> > through stellar gravitational and electromagnetic fields.
>
> Enough has been written as to why the `tired light' hypothesis is
> invalid (for example, you need a mechanism for frequency-independent
> scattering).
> It is also not the case that the only evidence for expansion comes
> from redshifted spectral lines; there are many other lines of
> evidence. In any case, the expansion is not an assumption but rather a conclusion.
>
> Of course, if the redshift is not caused by expansion, you have
> created more problems than you solve, and I'm not sure what you're
> trying to solve in the first place.
>
> --
> Phillip Helbig

Encountering and solving problems is an inherent feature of evolution
and development, and we need not be scared of them. What I am trying to
assert here is that the adoption of physically invalid concepts like the Big
Bang should not be accepted as a 'fait accompli'. In this connection let
me reproduce a message from "David F. Crawford"
<d.cra...@physics.usyd.edu.au>
received in response to the original post.

----------------------------------------------------------------

"David F. Crawford" wrote:
............

>I heartily agree with most of your statements. Especially in that
>Physics has lost much of its intuitive apeal and is dominated by
>abstract mathematics.
>
>If you are interested in alternative cosmological theories the
>following reference may be of interest to you. I believe that the
>accurate predictions and agreement of the theory with many observations
>suggests that it be given serious consideration. The paper discusses
>several observations, that at first sight, appear to be in
>disagreement. I welcome criticism and discussion.
>
> Regards
> David F. Crawford
>
> "Curvature pressure in a cosmology with a tired-light redshift".
> D. F. Crawford, 1999, Australian J. Phys. 52, 753. and
> http://xxx.lanl.gov/abs/astro-ph/9904131
>
> ABSTRACT
> A hypothesis is presented that electromagnetic forces that prevent
> ions from following geodesics results in a curvature pressure in a
> Tired-Light Cosmology. It may partly explain the solar neutrino
> deficiency and it may be the engine that drives astrophysical jets.
> However the most important consequence is that, using general
> relativity without a cosmological constant, it leads to a static and
> stable cosmology. Combined with an earlier hypothesis of a
> gravitational interaction of photons and particles with curved
> spacetime a static cosmology is developed that predicts a Hubble
> constant of H=60.2km/s/Mpc and a microwave background radiation with
> a temperature of 3.0 K. The background X-ray radiation is explained
> and observations of the quasar luminosity function and the angular
> distribution of radio sources have a better fit with this cosmology
> than they do with standard big-bang models. Although recent results
> (Pahre et al 1996) for the Tolman surface brightness test favor the
> standard big-bang cosmology they are not completely inconsistent with
> a static tired-light model. Most observations that imply the existence
> of dark matter measure redshifts, interpret them as velocities, and
> invoke the virial theorem to predict masses that are much greater than
> those deduced from luminosities. If however most of these redshifts
> are due to the gravitational interaction in intervening clouds no
> dark matter is required. Observations of quasar absorption lines, a
> microwave background temperature at a redshift of z=1.9731, type Ia
> supernova light curves and the Butcher-Oemler effect are discussed.
> The evidence is not strong enough to completely eliminate a
> non-evolving cosmology. The result is a static and stable cosmological
> model that agrees with most of the current observations.
--------------------------------------------------------------

G S Sandhu

<http://www.geocities.com/ResearchTriangle/Forum/9850/index.html>



t...@rosencrantz.stcloudstate.edu

Dec 16, 1999
In article <833rga$sls$1...@nnrp1.deja.com>,
Colin Walker <jav...@hotbot.com> wrote:

>Here is a treatment of scattering for a tired light hypothesis.
>It is frequency independent, and smearing due to scattering is
>unobservable.

I think it's true that you can get around the scattering problem in
this way. If the "tiring" is due to an extremely large number of
extremely small kicks, then you can make the angular deflection very
small. In this model, a photon's propagation direction undergoes a
random walk with of order f/H steps, each of which induces a shift in
direction of order H/f. The net shift is of order sqrt(H/f), which is
tiny.
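
Plugging in rough numbers (a minimal estimate, assuming H ~ 2e-18 per
second and an optical frequency f ~ 5e14 Hz):

import math

# Order-of-magnitude evaluation of the random-walk estimate above (assumed values).
H = 2e-18            # Hubble constant as an inverse time, 1/s
f = 5e14             # optical photon frequency, Hz

steps = f / H                    # ~2.5e32 kicks over a Hubble distance
kick = H / f                     # ~4e-33 rad per kick
net = math.sqrt(steps) * kick    # = sqrt(H / f)

print("net angular shift ~ %.0e rad" % net)   # ~6e-17 rad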

Of course, there's no known mechanism for producing such kicks. (I
imagine it'd be pretty tough to come up with one that was even close
to consistent with special relativity, too.) To make matters worse, I
don't think it's possible to get around the other problems, such as
the time dilatation of supernova light curves and the blackbody
spectrum of the microwave background. As Ned Wright explains at

http://www.astro.ucla.edu/~wright/tiredlit.htm

in a tired-light model a blackbody curve doesn't remain blackbody as
it redshifts. The frequencies redshift, but the number of photons
doesn't get diluted as it does in an expansion. You need both to
maintain the blackbody form. As Wright points out, even if you assume
the microwave background was produced very nearby, say at a redshift
of 0.1, the spectrum differs from what FIRAS observed by many
thousands of sigma. Things are really much worse than that, since the
CMB must come from much greater distances than that. A blackbody
spectrum comes from an optically thick source. The fact that we see
things out to redshifts of several means that any such source must be
pretty far away.

In fact, according to Wright's calculations, the CMB in a tired-light
model must come from a distance of less than 0.25 Mpc, roughly the
size of our own Galaxy. The Universe is certainly not optically
thick at distances like that!
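
A small numerical sketch makes the mismatch concrete (a minimal
illustration, assuming photon conservation with no volume dilution, a
source at z = 0.1, and a crude grid-search fit of a single-temperature
Planck photon spectrum -- the residuals it finds dwarf FIRAS's roughly
10^-4-level precision):

import numpy as np

# Illustrative sketch: a blackbody spectrum redshifted by z = 0.1 in a static
# (tired-light) universe is not a blackbody at any temperature.  Photon number
# is conserved and each frequency shrinks by (1+z), but there is no (1+z)^3
# volume dilution, so the observed photon spectrum is (1+z)^3 times a Planck
# spectrum at T/(1+z) -- the wrong amplitude for its shape.
h, k, c = 6.626e-34, 1.381e-23, 3e8

def planck_n(nu, T):
    # blackbody photon number density per unit frequency
    return 8 * np.pi * nu**2 / c**3 / np.expm1(h * nu / (k * T))

z = 0.1
T_emit = 2.725 * (1 + z)              # emitted so it "looks like" ~2.7 K today
nu = np.linspace(3e10, 6e11, 400)     # roughly the FIRAS band, Hz

n_tired = (1 + z)**3 * planck_n(nu, T_emit / (1 + z))

# crude grid-search fit of a single-temperature blackbody
temps = np.linspace(2.5, 4.0, 3001)
chi2 = [np.sum((n_tired - planck_n(nu, T))**2) for T in temps]
T_fit = temps[int(np.argmin(chi2))]

resid = np.abs(n_tired - planck_n(nu, T_fit)) / n_tired
print("best-fit blackbody T = %.3f K" % T_fit)
print("median fractional residual = %.1f%%  (FIRAS precision ~ 0.01%%)"
      % (100 * np.median(resid)))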

-Ted

Jonathan Thornburg

Dec 16, 1999
In article <handleym-101...@handma3.apple.com>,
Maynard Handley <hand...@ricochet.net> wrote:
> there is not perceived to be a market for books
> targeting, say, the intelligent physics junior, giving an overview of the
> math and physics ideas involved and how they connect together (e.g. how
> these virtual particles one is always hearing about map to mathematics).

In article <83832i$9dv$1...@mach.thp.univie.ac.at>,
I (Jonathan Thornburg <jth...@galileo.thp.univie.ac.at>) replied


| Hmm. I won't comment on the academic book market in general, but here
| are a few physics books which I think count as counterexamples...
|
| First off, the all-time classics, still in print after 36+ years:

Unfortunately, due to careless editing on my part, I mistakenly sent
off my article <83832i$9dv$1...@mach.thp.univie.ac.at> with the header
line
From: hand...@ricochet.net (Maynard Handley)
copied from Maynard's original article, and the s.p.r. moderator didn't
notice my mistake. The net result was that my article appeared with
this incorrect header line, thus appearing (different sense of "appear"
there!) to have been written by Maynard. [My correct identification
did appear in the signature of the article, but not all newsreaders
show signatures.]

I apologise for the mistaken attribution of my writings to Maynard.
Hopefully this message will be linked into this thread in the various
archives, preventing future readers from being _too_ confused...

--
-- Jonathan Thornburg <jth...@galileo.thp.univie.ac.at>
http://www.thp.univie.ac.at/~jthorn/home.html
Universitaet Wien (Vienna, Austria) / Institut fuer Theoretische Physik

"Stocks are now at what looks like a permanent high plateau" -- noted
economist Irving Fisher, 2 weeks before the 1929 stock market crash


John Baez

Dec 16, 1999
In article <830u8r$ogg$1...@cnn.Princeton.EDU>,

Paul D. Shocklee <shoc...@Princeton.EDU> wrote:

>I can recommend:

>_A Unified Grand Tour of Theoretical Physics_, Ian Lawrie
>_Introduction to Elementary Particles_, Griffiths

Can you say a bit more about what these are like? I know
it's a bit lazy for me to ask you for some book reviews
instead of just looking at them myself, but having written
lots of book reviews myself....

In particular: what kind of stuff do they cover, how much math
do they assume the reader knows, and do they help one improve
one's physical intuition?

Toby Bartels

Dec 16, 1999
John Baez <ba...@galaxy.ucr.edu> wrote:

>Finally, I'd tell the young physicist to make sure to regularly
>read sci.physics.research, and ask lots of good technical questions.

Yes! I think a lot of what we do here fits in that gap pretty well.


-- Toby


Maurice Barnhill

Dec 16, 1999
Jonathan Thornburg wrote:

> In article <handleym-101...@handma3.apple.com>,
> Maynard Handley <hand...@ricochet.net> wrote:

> > there is not perceived to be a market for books
> > targeting, say, the intelligent physics junior, giving an overview of the
> > math and physics ideas involved and how they connect together (e.g. how
> > these virtual particles one is always hearing about map to mathematics).

> Hmm. I won't comment on the academic book market in general, but here
> are a few physics books which I think count as counterexamples...

> [snip]

In some ways the list of counter-examples supports Maynard Handley's
comments. The Feynman books were well-edited transcripts of lectures
to *freshmen*; I expect that their turning out to be more relevant for
juniors and graduate students than for freshmen was unanticipated by the
publishers. In addition, astronomy and astrophysics have long had an
excellent literature of this type, so the last two texts mentioned are
not surprising. This leaves only two books deliberately published to be
of the type Handley calls for, and one of these is written by an
astrophysicist.

Perhaps the answer to the problem is to use the World-Wide Web as the
publishing medium. The author won't make any money, but at least he/she
will have an audience. Such books might make good projects for
recently-retired faculty members.
--
Maurice Barnhill, m...@udel.edu
http://www.physics.udel.edu/~barnhill/
Physics Dept., University of Delaware, Newark, DE 19716


Maynard Handley

Dec 18, 1999
In article <8368ek$g...@charity.ucr.edu>, ba...@galaxy.ucr.edu (John Baez) wrote:

>In article <handleym-101...@handma3.apple.com>,
>Maynard Handley <hand...@ricochet.net> wrote:

Since I started this complaint, perhaps I should chime in with a little more.

In mathematics, I thoroughly enjoyed

Gabriel Weinreich, Geometrical Vectors.
Not the deepest book in the world, but after reading it, you can't help
but view every integral and every vector PDE you see in a totally new
light. My only complaint----a complaint I'd likewise direct at every book
I've ever read on the subject---is that by not point out a single little
one paragraph point, this material seems weird and bizarre until one
figures it out oneself.
In say an area integral of the form
Integral omega(x, y, z) dx dy
A
in the good old Riemann interpretation dx, dy correspond to bits of area.
In some sense the area specification is split over A and dx dy.
In the forms interpretation of this, the dx dy are COMPLETELY associated
with omega. They might as well be called omega-hat_x and
omega-hat_y---they specify only the directionality properties of omega.
Meanwhile the area specification lives only in A. Yeah, this is obvious
when you know this stuff. But when you are learning it, this is not
trivial. It is very hard to simply suspend disbelief when one is told dx is
now a vector (basis form), when one is not told that one needs to
reinterpret integrals in this way.
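
A minimal worked example of that reading, assuming the usual pullback rule
dx^dy = r dr^dtheta in polar coordinates: the 2-form goes with omega, and
the region A just says where.

import sympy as sp

# Illustrative example: integrate omega = x^2 + y^2 over the unit disk A,
# once as a plain iterated Riemann integral and once by pulling the 2-form
# omega dx^dy back to polar coordinates, where dx^dy becomes r dr^dtheta.
x, y = sp.symbols('x y', real=True)
r, th = sp.symbols('r theta', positive=True)

omega = x**2 + y**2

direct = sp.integrate(
    sp.integrate(omega, (y, -sp.sqrt(1 - x**2), sp.sqrt(1 - x**2))),
    (x, -1, 1))

pulled_back = sp.integrate(
    sp.integrate((r**2) * r, (r, 0, 1)),   # omega -> r^2, dx^dy -> r dr^dtheta
    (th, 0, 2 * sp.pi))

print(direct, pulled_back)                 # both pi/2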

Robert Geroch, Mathematical Physics.
The most inspiring math book I've ever read. I get weepy when I think of
this and how much light it shed on so many different subjects. Any junior,
senior, 1st yr grad physics student reading this---buy this book today and
read a chapter a night before you go to sleep. You will be amazed at how
smart you are at the end of it.

But I don't see physics books that fill this same sort of void.
Maybe I was an idiot---though I don't think so---I got better grades than
most of my classmates as an undergrad, and as a grad student at Cornell. I think the
problem was more that I wanted to really understand something, not accept
some half-assed explanation.
Topics I had difficulty with in physics at various points include

Lagrangians:
Not how to use them to generate equations, but first, the perverted
mathematical notation physicists use in this area, and second the point
that a Lagrangian isn't really a point function on configuration
space---it's a function on the tangent bundle, or, in simpler terms, it's
a function associated with a world line through configuration space, not
with single points on that world line.
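
In symbols (the standard formulation, spelled out here only for definiteness):
the Lagrangian is a function on the tangent bundle of configuration space,

   L : TQ \to \mathbb{R}, \qquad (q, \dot q) \mapsto L(q, \dot q),

so its value at an instant depends on where the world line is *and* which way
it is heading there, and the Euler-Lagrange equations

   \frac{d}{dt} \frac{\partial L}{\partial \dot q^i}
     - \frac{\partial L}{\partial q^i} = 0

are what pick out the physical world lines.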

Hamiltonians:
One gets into a whole mess of stuff here about action-angle variables,
Hamilton's principal function etc, and let's face it---who the hell cares?
Yeah the action-angle stuff had its day in the 19teens before
Schrodinger's equation, when people managed to figure out a few quantum
properties using it, but QM has moved on since then. The books claim it is
important in celestial mechanics. My answer would be---give a compelling
example, or don't waste our time and use the lecture space to teach
something more useful---like some modern differential geometry. Likewise
for the principal function stuff---so we get some bizarre analogy to
something that looks like optics if you squint hard and ignore
polarization. The books then claim there is something deep here that will
come out when one studies gauge fields and path integrals. My opinion (and
I don't mean this flippantly) is, great, teach the analogy at that point,
but don't waste time teaching this earlier.
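
(For the record, the simplest worked instance, the 1-d harmonic oscillator;
standard material, quoted only for orientation. The action variable is

   J = \frac{1}{2\pi} \oint p\, dq = \frac{E}{\omega},

so H = \omega J, the conjugate angle advances at the constant rate
\dot\theta = \partial H / \partial J = \omega, and the old Bohr-Sommerfeld
rule J = n\hbar is one line away. That is roughly all the 19teens quantum
theory the books are alluding to.)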

Optics:
The point here is that
(1) laser light is simple light. Laser light, although generated by
strange quantum effects, is like the signal you get from an aerial---a
classical EM wave with long-term coherence, and easily understood as a
classical EM wave, while
(2) "normal" light is this godawful mess.
Optics books tend, first, not to mention that normal light is a godawful
mess quite unlike the ideal infinite waves they discuss, so that it's not
clear why, for example, you have to send the light through slits to get
diffraction. Meanwhile laser light is presented as this strange
complicated phenomenon, so that any student who does figure out it matches
perfectly the ideal infinite waves of the book is convinced they're
somehow missing the point.
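
(One number makes the contrast concrete; this is only an order-of-magnitude
estimate, not taken from any particular optics text. The coherence length of
a source of bandwidth \Delta\nu is roughly

   \ell_c \sim c / \Delta\nu \sim \lambda^2 / \Delta\lambda,

microns to a fraction of a millimetre for filtered thermal light, centimetres
to many metres for a laser; and an extended thermal source also has to be
squeezed through a pinhole or slit before any two points of the emerging wave
are even spatially coherent. Hence the slits, and hence laser light really
being the simple case.)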

Gauge transformation:
This is not hard. But I've yet to see a book that explains this
without making it absolutely incomprehensible.
(1) All we see is E and B.
(2) E and B do not uniquely determine A.
(3) This means we can change A in ways that preserve E and B to make our
lives easier. This is no different from, in a problem involving gravity,
deciding to direct the z axis parallel to the gravitational field rather
than oriented at 27 degrees with respect to it.
That's it---there's no need to try to turn this into a deep mystery.
For advanced students (preferably after an intro to GR):
(4) Why do we care about gauge transformation? Because we want to know
where field equations come from, and how to make up new field equations
that might represent new fields. A field equation attaches some sort of
value to each point in space. How can we compare the values at two points
in space? (Eg the phase of the electron field at two points in space).
(Post GR they should appreciate that this is non-trivial). One needs some
sort of connection.
Gauge transformations are tied up with these connections.
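
For the record, the one-line version of (1)-(3) in ordinary electromagnetism
(standard formulas, nothing exotic):

   B = \nabla \times A, \qquad E = -\nabla\phi - \partial A / \partial t,

and for any function \chi(x,t) the replacement

   A \to A + \nabla\chi, \qquad \phi \to \phi - \partial\chi / \partial t

leaves E and B untouched, so \chi can be chosen purely for convenience
(Coulomb gauge \nabla \cdot A = 0, Lorenz gauge, and so on), exactly as one
chooses a convenient direction for the z axis. In the language of (4), A is
precisely the connection that lets one compare phases at neighbouring points.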

Solid State:
The problem here is that books are riddled with useless explanations
of how something can be "explained" in terms of ballistic electrons
following the rules of classical mechanics. Exactly what problem is this
solving? I'd rather have the solid state course concentrate on explaining
a few things correctly than giving basically useless pseudo-explanations
of a wide range of phenomena.

I hope this clarifies my point about the gap between the quality of
expository books in math vs in physics. While I have no complaints with
the books John Baez indicated below, they're too simple compared to the
issues I'm trying to raise. Eg QED and the Feynman lectures are fine works, but
not at the level I am considering.

BTW in terms of physics history books, I am currently reading _Quantum
Generations_ by Helge Kragh, a fairly recent book. The author is not a
physicist; nonetheless I have been amazed at just how good a book it is,
both in terms of readability and in terms of clarifying the history of
science. Without wanting to slight any of the books suggested below, I'd
thoroughly recommend this to everyone.

Maynard


>>In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>,
>>ba...@galaxy.ucr.edu (John Baez) wrote:
>
>>>Someone complained:
>
>>>>Modern Particle Physics is, in a literal sense incomprehensible.
>
>>>Not to me. Not to the people who do it. If one doesn't understand
>>>particle physics, one can either study it or decide it's not worth the
>>>bother. If one chooses the latter course, complaining that it's
>>>"incomprehensible" seems like an odd use of time.
>
>>Let me structure this complaint a little differently.
>>
>>The physics literature seems to have split into pop physics and mongraphs
>>with nothing in between. There is perceived to be a market for physics
>>books that are dumbed down to the point of idiocy (god forbid they include
>>a single equation), but there is not perceived to be a market for books
>>targetting say the intelligent physics junior giving an overview of the
>>math and physics ideas involved and how they connect together (eg how
>>these virtual particles one is always hearing about map to mathematics).
>

Charles Francis

unread,
Dec 20, 1999, 3:00:00 AM12/20/99
to sci-physic...@moderators.uu.net
In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>, John Baez
<ba...@galaxy.ucr.edu> writes

>This suggests that unless some surprising new development in fundamental
>physics comes along, it's a good time to take a breather and let practical
>technology - what we can actually *afford to build* - catch up with
>what we can imagine building in principle.

I think that is what we have been doing for the last fifty or more
years.

>>Today the best talent, the cream of our youth no longer find
>>it attractive to choose a career for doing fundamental research in
>>Physics. Why? Is it that at the end of 20th century, the fundamental
>>research in Physics is no longer that challenging a job as it was at the
>>beginning of the century?

>I think the reason is related to what I said above. We already know

>more fundamental physics than we know how to use. The way to make
>easy progress now is to apply what we know to fields such as condensed
>matter physics, chemistry, biology, and the associated technologies:
>material science, electronics and photonics, nanotechnology, and
>biotechnology. Using this we can lay the foundation for a new higher
>level of fundamental experimental physics.

That is not the basis for new research into theoretical physics. We have
more fundamental theory than we know how to express in a simple, unified
manner. Real fundamental research means putting existing theory into the
simplest, and most intuitive form (there is a quote from Maxwell to this
effect). Like expressing Einstein's field equation in English (as you
did), so that if someone has an idea for a mechanism that would result
in the field equation, he can see from the English explanation that he
is on the right lines, and also that he has made a simple mistake in a
proof, without having to become an expert in gtr before he can say
anything at all. Thanks very much by the way.

>>Modern Particle Physics is, in a literal sense incomprehensible.

>Not to me. Not to the people who do it. If one doesn't understand


>particle physics, one can either study it or decide it's not worth the
>bother. If one chooses the latter course, complaining that it's
>"incomprehensible" seems like an odd use of time.

This is really the Emperor's new clothes. Why is only Feynman able to
admit that he does not understand the fundamental laws of particle
physics? There is a difference between being able to learn and work with
the laws of quantum mechanics, and actually understanding on a
theoretical basis why we have the laws of quantum mechanics. You have to
recognise that the laws of quantum mechanics are not understood to
recognise that this is the major problem of our era. Not sweep them under
the carpet and claim that they are understood. If so many mathematicians
who claim to understand quantum mechanics, really understood quantum
mechanics, why are they unable to explain themselves, and why do they
never even try to explain themselves, and instead resort to disparaging
the intelligence of anyone else?

--
Regards

Charles Francis
cha...@clef.demon.co.uk


Terry Padden - Chelwood Consulting

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
Charles Francis <cha...@clef.demon.no.junk> wrote in message
news:jWubaJBh...@clef.demon.co.uk...

> That is not the basis for new research into theoretical physics. We have
> more fundamental theory than we know how to express in a simple, unified
> manner. Real fundamental research means putting existing theory into the
> simplest, and most intuitive form (there is a quote from Maxwell to this
> effect). Like expressing Einstein's field equation in English (as you
> did), so that if someone has an idea for a mechanism that would result
> in the field equation, he can see from the English explanation that he
> is on the right lines, and also that he has made a simple mistake in a
> proof, without having to become an expert in gtr before he can say
> anything at all. Thanks very much by the way.

(a) Hear bloody hear!

but (b), what of the possibility that current (= last 50 years) problems
are symptomatic of real problems with the foundations of physical theories,
e.g. continuum mathematics? Elsewhere, issues concerning the valid
applicability of Real Numbers, infinitesimals etc. have been raised (Guilty,
your honour). Surely, with nothing better to do than twiddle thumbs, these
matters cry out for attention ?

> >>Modern Particle Physics is, in a literal sense incomprehensible.
>

> >Not to me. Not to the people who do it. If one doesn't understand
> >particle physics, one can either study it or decide it's not worth the
> >bother. If one chooses the latter course, complaining that it's
> >"incomprehensible" seems like an odd use of time.
>
> This is really the Emperor's new clothes. Why is only Feynman able to
> admit that he does not understand the fundamental laws of particle
> physics? There is a difference between being able to learn and work with
> the laws of quantum mechanics, and actually understanding on a
> theoretical basis why we have the laws of quantum mechanics. You have to
> recognise that the laws of quantum mechanics are not understood to
> recognise that this the major problem of our era. Not sweep them under
> the carpet and claim that they are understood. If so many mathematicians
> who claim to understand quantum mechanics, really understood quantum
> mechanics, why are they unable to explain themselves, and why do they
> never even try to explain themselves, and instead resort to disparaging
> the intelligence of anyone else.

Step forward and take a bow, Charles !

Terry Padden


Terry Padden - Chelwood Consulting

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
Jim Carr <j...@ibms48.scri.fsu.edu> wrote in message
news:83gljj$moi$1...@news.fsu.edu...

> Sandhu G <san...@ch1.dot.net.in> wrote in message
> news:3849B2D4...@ch1.dot.net.in...
> }
> } The general opinion is that in all branches of applied
> } sciences, our progress and achievements have been tremendous.
>
> In article <1Kr24.5443$5K1....@newsfeeds.bigpond.com>

> "Terry Padden - Chelwood Consulting" <TC...@bigpond.com> writes:
>
> >The measure of science is comprehension not technology.
>
> And what we comprehend at the end of this century in comparison
> to what was understood at the start of it is a pretty clear
> measure of exceptional progress and major achievements.

I instanced the data on which I made my judgement. You have not; presumably
you meant to say "IMHO". Of course you probably understand a lot more than
I do.

On what do you base your assertion ? At the end of the 19th century it was a
common view amongst professional physicists that nearly everything was
understood, except for a couple of minor problems with the ultraviolet
catastrophe and the lack of variation in measured light speed. Their
opinions of their achievements are just as valid as your opinions of current
achievements.

In the 20th century, according to Feynman no one understands QM; Wigner
admitted he did not; ( I realise the incoherents may disagree); there is
evident conflict between the two major physical theories; epistemology and
ontology have been replaced by Pure Math. Dark Matter (or is it Darth
Vader) will rescue cosmology. A Professor of Physics in a professional
paper points out that if ZPE is real the Solar system cannot be. Barbour's
recent book "The End of Time" confirms that physics professionals who do not
toe the party line are in professional danger.

So IMHO, based on well-known facts and the expressed views of relevant senior
professionals, science has gone backwards in comprehension this century
compared to last. If this lack of comprehension is acknowledged, it is not
necessarily a bad thing. Recognition of ignorance is always a good starting
point.

You choose to assert otherwise. On what basis ? A list of technological
accomplishments is not an answer acceptable to me.

> >May I suggest that the recent
> >thread on SPR on " Instances of Zero " is a reasonably fair assessment of
> >our actual accomplishments. It seems we don't even know if Zero exists.
>

> An assessment based on a poorly posed question is not reasonable.

What is poorly posed about "Does Zero exist ?" I can think of no clearer or
more important question in physical science. We have had many eminent
scientists (e.g. Hawking, Guth) claiming that the Universe is a quantum
fluctuation ('the ultimate free lunch' I believe was the phrase) about zero;
i.e. the universe came out of zero. To know what they mean we must first
have a clear view of zero.

Accepting (which I do) that the data I used for my assertion is flawed, what
experimental data isn't ? You offered no data.

> That thread was multi-posted; did you see all of the replies?

I am an amateur and new to this kind of stuff. I only read SPR (just
started on SMR), so I only read what was on SPR. What have I missed ?
Where ?

> we know that zero exists as a number.

I think if your "we" includes me, we are already in difficulties here. I
distinguish between a label and its contents; i.e. I recognise the empty set.
I agree the designation / label of Zero exists; I use it regularly, hence my
interest in the question. What do you mean by a number ? A label ?

> Those three answers are sufficiently
> disjoint that it should be clear why I think the question is ill posed.

So what should the question(s) be ? I think it is an excellent question, to
which there should be a reasonably clear answer. It is the answers (I saw)
which are ill posed - which is the point of my posting - and hence even more
enjoyable. What do you think the answer(s) should be ?

Terry Padden

PS I have to agree that as stated it IS generally agreed that there has been
wonderful progress in science - I just disagree; sorry and all that.


Jim Carr

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
Jonathan Thornburg wrote:

}
} Maynard Handley <hand...@ricochet.net> wrote:
} > there is not perceived to be a market for books
} > targetting say the intelligent physics junior giving an overview of the
} > math and physics ideas involved and how they connect together (eg how
} > these virtual particles one is always hearing about map to mathematics).
}
} Hmm. I won't comment on the academic book market in general, but here
} are a few physics books which I think count as counterexamples...
> [snip]

In article <385903EA...@udel.edu>

Maurice Barnhill <m...@udel.edu> writes:
>
>In some ways the list of counter-examples supports Maynard Handley's
>comments.

I tend to agree, and although I second "Thinking like a Physicist"
as a very entertaining book that spans a lot of physics, it is not
the integrative text asked for above. One comment, however:

>The Feynman books were well-edited transcripts of lectures
>to *freshmen*;

Caltech freshmen (and sophomores). Since Caltech freshmen tend to have
about the level of preparation of a typical junior at most universities,
their relevance for upper division students should not be too surprising.

>I expect that their turning out to be more relevant for
>juniors and graduate students than for freshmen was unanticipated by the
>publishers.

Also by the lecturer, if you read the preface. Reportedly he was
sometimes talking to the postdocs and grad students who started
showing up to hear Feynman.

--
James A. Carr <j...@scri.fsu.edu> | Commercial e-mail is _NOT_
http://www.scri.fsu.edu/~jac/ | desired to this or any address
Supercomputer Computations Res. Inst. | that resolves to my account
Florida State, Tallahassee FL 32306 | for any reason at any time.


john baez

unread,
Dec 21, 1999, 3:00:00 AM12/21/99
to
In article <jWubaJBh...@clef.demon.co.uk>,
Charles Francis <cha...@clef.demon.no.junk> wrote:

>In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>, John Baez
><ba...@galaxy.ucr.edu> writes

>>This suggests that unless some surprising new development in fundamental
>>physics comes along, it's a good time to take a breather and let practical
>>technology - what we can actually *afford to build* - catch up with
>>what we can imagine building in principle.

>I think that is what we have been doing for the last fifty or more
>years.

I disagree. Until the turn of the 1980s, advances in particle
accelerator technology led to many surprising discoveries in particle
physics. Technology was leading theory. Since then, I'd say, our
theoretical understanding of particle physics has almost gone past
our current technological ability to test it.

(Of course there are exceptions, as mentioned in my previous post,
like the discovery of neutrino mass. The job of the experimentalist
is to find more such exceptions while theorists such as myself sit in
our chairs and play the pundit. But while I'm playing the pundit....)

Consider for example the discovery of the charmed quark (or more precisely
the J/psi meson, which is a bound state of a charm-anticharm pair) in 1974
by Ting and Richter. While Glashow, Iliopoulos and Maiani had actually
predicted its existence in 1970 in order to eliminate unobserved particles
through what's now called the "GIM mechanism", I think its discovery came
as quite a surprise at the time.

Or consider the discovery of the tau lepton by Martin Perl in 1975.
As far as I know, nobody had theoretical reasons for predicting the
existence of a third generation of quarks and leptons, so this came
as a bolt out of the blue.

After the tau was found, one could naively extrapolate the third
generation and predict the existence of the tau neutrino and two
quarks, but as far as I know, there isn't any really great theoretical
reason why this pattern *must* repeat itself exactly this way. (Of
course the anomalies must cancel, but I'd be surprised if the observed
generation structure is the only way.) So I think the discovery
of the bottom quark in 1977 by Lederman was quite important news.
On the other hand, the discovery of the top quark in 1994/95 was in
a sense just a confirmation that the pattern was continuing. Similarly,
the gradually growing body of evidence for the existence and properties
of the tau neutrino has more the flavor of "confirmation of theory"
than "surprising discovery" - see:

http://www.fynu.ucl.ac.be/library/theses/gustaaf.brooijmans/node7.html

If I had to draw an imaginary line, I'd mark the turning point as the
discovery of the W and Z particles at CERN in the period 1983-1985.
Of course these had been predicted by Glashow, Salam and Weinberg,
but if you read their reminiscences, at the time even these creators
of the SU(2) x U(1) model didn't take it terribly seriously. Only
later did evidence for it grow to the point where it seemed like
the only sensible candidate for a theory of the electroweak interactions.
The discovery of the W and Z seems to me like the point at which the
Standard Model became truly "set in stone".

(Of course, now the cracks are showing up, as discussed in a previous
thread.)

Someone with a better background in accelerator design could tell
a nice story about how these discoveries were made possible by
specific advances in technology. Some of this story is obvious
even to me. For example, experiments that require sorting through
truly massive amounts of data were hard to do before we had good
computers. There have also been many important advances in our
ability to detect particles since the early days of the cloud chamber.

>>Sandhu wrote:

>>>Modern Particle Physics is, in a literal sense incomprehensible.

>>Not to me. Not to the people who do it. If one doesn't understand
>>particle physics, one can either study it or decide it's not worth the
>>bother. If one chooses the latter course, complaining that it's
>>"incomprehensible" seems like an odd use of time.

>This is really the Emperor's new clothes. Why is only Feynman able to
>admit that he does not understand the fundamental laws of particle
>physics? There is a difference between being able to learn and work with
>the laws of quantum mechanics, and actually understanding on a
>theoretical basis why we have the laws of quantum mechanics. You have to
>recognise that the laws of quantum mechanics are not understood to
>recognise that this the major problem of our era. Not sweep them under
>the carpet and claim that they are understood.

There's a huge gap between being "incomprehensible" and being
"understood" in some complete and final sense. Quantum mechanics
and particle physics sit somewhere in this gap. Indeed, I'd say
all science sits somewhere in this gap! I was not arguing that
we understand particle physics to the extent that we don't need to
think about it more. I was merely arguing that it's not "incomprehensible"!

>If so many mathematicians
>who claim to understand quantum mechanics, really understood quantum
>mechanics, why are they unable to explain themselves, and why do they
>never even try to explain themselves, and instead resort to disparaging
>the intelligence of anyone else.

I'm not sure who these evil mathematicians are. Perhaps you have
some specific culprits in mind, but most mathematicians I know regard
quantum mechanics - and indeed most physics! - as quite mysterious.
They are typically unafraid of the mathematical formalism, at least
after it's been made rigorous, but they usually admit to having a
poor understanding of the "physical meaning" of this formalism. Most
of them don't go around disparaging people's intelligence, either.

But I really think we should discuss physics, not the personal
behavior of the shadowy unnamed figures to whom you refer.
Personalizing the discussion tends to evoke more heat than light.


Kevin A. Scaldeferri

unread,
Dec 22, 1999, 3:00:00 AM12/22/99
to
In article <1999122121...@charity.ucr.edu>,

john baez <ba...@math.ucr.edu> wrote:
>
>Or consider the discovery of the tau lepton by Martin Perl in 1975.
>As far as I know, nobody had theoretical reasons for predicting the
>existence of a third generation of quarks and leptons, so this came
>as a bolt out of the blue.

That's not quite accurate. It had been pointed out (some time
earlier, mid sixties?) that CP violation could be explained by the
non-equality of weak and mass eigenstates, but only if there were at
least three generations. Of course, at the time quarks were pretty
far-out theoretical speculations and people were only beginning to
formulate what would become the Standard Model, so this prediction
wasn't taken too seriously, even less so than the prediction of
charm.
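
The counting behind that statement, for anyone who wants it (standard
bookkeeping, not specific to this thread): an N x N unitary quark-mixing
matrix has N(N-1)/2 rotation angles, and after absorbing as many phases as
possible into the 2N-1 independent rephasings of the quark fields it is left
with

   (N-1)(N-2)/2

physical CP-violating phases: zero for N = 2, one for N = 3. That is why at
least three generations are needed if CP violation is to live in the mixing
matrix.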

>After the tau was found, one could naively extrapolate the third
>generation and predict the existence of the tau neutino and two
>quarks, but as far as I know, there isn't any really great theoretical
>reason why this pattern *must* repeat itself exactly this way.

I don't think the CKM mechanism will work if the generations don't
couple in the same ways. Perhaps you could still find some way to
generate the mixings, but it wouldn't have the same elegance.


--
======================================================================
Kevin Scaldeferri Calif. Institute of Technology
The INTJ's Prayer:
Lord keep me open to others' ideas, WRONG though they may be.


Charles Francis

unread,
Dec 22, 1999, 3:00:00 AM12/22/99
to

In article <1999122121...@charity.ucr.edu>, john baez
<ba...@math.ucr.edu> writes

>In article <jWubaJBh...@clef.demon.co.uk>,
>Charles Francis <cha...@clef.demon.no.junk> wrote:
>
>>In article <82k6ti$q50$1...@rosencrantz.stcloudstate.edu>, John Baez
>><ba...@galaxy.ucr.edu> writes
>
>>>This suggests that unless some surprising new development in fundamental
>>>physics comes along, it's a good time to take a breather and let practical
>>>technology - what we can actually *afford to build* - catch up with
>>>what we can imagine building in principle.
>
>>I think that is what we have been doing for the last fifty or more
>>years.
>
>I disagree. Until the turn of the 1980s, advances in particle
>accelerator technology led to many surprising discoveries in particle
>physics. Technology was leading theory. Since then, I'd say, our
>theoretical understanding of particle physics has almost gone past
>our current technological ability to test it.
>
<snip>

I don't disagree with anything you say about the discovery of new
particles, but the fundamental problem of twentieth century science is
the comprehensibility of quantum mechanics, and that has been largely
swept under the carpet for fifty years.

Let us agree on that, but can I ask you to be careful about what you say
about the comprehensibility of quantum mechanics. When people, such as
Sandhu, refer to it as incomprehensible they are generally not referring
to the mathematical formalism, but to the physical meaning.

Jim Carr

unread,
Dec 23, 1999, 3:00:00 AM12/23/99
to
In article <1Kr24.5443$5K1....@newsfeeds.bigpond.com>
"Terry Padden - Chelwood Consulting" <TC...@bigpond.com> writes:
>
> ... For some 30 years now the best HSG's have
>mostly opted for Law, Medicine, Vet Science, & Commerce MBA tracks for good
>reasons. It seems that Science is now generally for second rate minds.

What documentation do you have for that assertion about the choice
of field of study by college freshmen since 1969? The precipitous
drop in PhD production in physics in 1969 was as much a result of
the anomaly in previous years as anything. The mean rate after that
instability settled down remained an order of magnitude above what
it was before the war. Hardly an abandonment of the field.

Enrollment numbers at the Enormous State University I was at during
one of those decades showed a drop in the social "sciences" and an
increase in business and engineering. The distribution of winners of
full-ride academic scholarships among science and humanities majors
did not seem to change much at all. I suspect what the first-rate
students do is dictated as much by personality and interests as anything
external. Those driven by other goals might choose a major based on
where they think the money is rather than what they think is interesting.

Ralph E. Frost

unread,
Dec 24, 1999, 3:00:00 AM12/24/99
to
Jim Carr wrote:

> We know that zero exists as a number. It seems very likely that the
> resistance in a superconductor is zero. It has been understood for
> centuries that you cannot use experimental data to establish anything
> more than an upper limit on a value that might be zero because the
> experimental uncertainties (both statistical and systematic) do not
> allow you to do anything else. Those three answers are sufficiently

> disjoint that it should be clear why I think the question is ill posed.

So, you think resistance in a superconductor is an instance of zero?

It's the closest thing I've heard yet, but I still think, in the ill-formed
way I pose the ill-posed question, what folks are trying to say [as in
your fine instance] is "with the way we define everything else this
superconductor stuff is really indicating something else and the regular
equation/approximation/concept doesn't hold."

I like your answer though because it is very clear that to reach this
"zero" one has to arrange conditions and take a bunch of
things to an extreme whereby different rules apply.

I guess you guys'd say the value went to zero, and I would say things
merged into a different level of encapsulation/organization (where
resistance no longer exists).

--
Best regards,
Ralph E. Frost

http://www.dcwi.com/~refrost/index.htm ..Feeling is believing..


C. Cagle

unread,
Dec 26, 1999, 3:00:00 AM12/26/99
to
In article <1999122121...@charity.ucr.edu>, john baez
<ba...@math.ucr.edu> wrote:

> There's a huge gap between being "incomprehensible" and being
> "understood" in some complete and final sense. Quantum mechanics
> and particle physics sit somewhere in this gap. Indeed, I'd say
> all science sits somewhere in this gap! I was not arguing that
> we understand particle physics to the extent that we don't need to
> think about it more. I was merely arguing that it's not "incomprehensible"!

The only way you could say that with authority is if you believe you
have comprehended particle physics in its entirety. Can you make that
claim? If you haven't comprehended it yourself then you can certainly
argue that it is not incomprehensible but not with believable
authority.

Charles Cagle


Charles Francis

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to
In article <SVf84.1200$oJ5....@newsfeeds.bigpond.com>, Terry Padden -
Chelwood Consulting <TC...@bigpond.com> writes
>Charles Francis <cha...@clef.demon.co.uk> wrote in message
>news:83rs3h$bdb$1...@rosencrantz.stcloudstate.edu...

>
>> >They are typically unafraid of the mathematical formalism, at least
>> >after it's been made rigorous, but they usually admit to having a
>> >poor understanding of the "physical meaning" of this formalism. Most
>> >of them don't go around disparaging people's intelligence, either.
>
>> Let us agree on that, but can I ask you to be careful about what you say
>> about the comprehensibility of quantum mechanics. When people, such as
>> Sandhu, refer to it as incomprehensible they are generally not referring
>> to the mathematical formalism, but to the physical meaning.
>
>It is beyond dispute that the formalism works - to 12 places etc. If there
>is nothing wrong, or missing, or extraneous with the mathematical formalism,
>then the only issue is a linguistic one. The formalism cannot be properly
>translated into a meaningful physical interpretation using available
>acceptable physical language. Surely this is the basis of the Heisenberg
>view. May be our minds do have a limited ability to understand the universe
>when "Words fail us". If not, then either we need new words; or could it
>be, that maybe ...?
>
>No! The formalism is perfect. Yet Bell claimed to have found an error in
>Von Neumann's Foundations that had lain undetected for decades. Hmm.
>
Bell did not find an error in Von Neumann's foundations (or Dirac's
which are pretty much the same). He misunderstood them. Nor did he
originate the misunderstanding. He merely found a mathematical way of
describing some of the misunderstandings which have been built into
quantum mechanics, both in the Copenhagen interpretation, and in the
failure of the academic establishment to come to terms with the
foundations of quantum mechanics. Einstein demonstrated in the EPR
experiment that much of what was being said in the Copenhagen
interpretation did not make sense; Bell proved it. That had nothing to
do with Von Neumann's or Dirac's measurement theory approach to quantum
mechanics.

If you study the foundations carefully, you will find that there is
requirement in the laws of quantum mechanics for background space-time.
What ails fundamental research in physics is that almost no-one has
understood this element of the foundations of quantum mechanics, and
background space-time is commonly brought back in by field theorists.
Now we have 90% of physical theorists working on string theory and the
quantisation of space-time because they have not understood that space-
time was quantised seventy or more years ago.

This is largely the consequence of teaching undergraduates not to try to
understand the foundations of quantum mechanics. Those who do are pushed
to the fringes and almost never integrate their approach with
relativity. Those who become relativistic throw away the foundations of
quantum mechanics and look at C*-algebras instead - although the study
of operators which do not operate on anything is manifestly ludicrous -
one would say overspeculative, but to be speculative at all there would
have to be a possibility that the structure could make physical sense -
whereas the idea of the exercise is to separate the maths from the
sense.

To resolve the problems it is necessary to go back to basics, but to
leave out the Schrodinger equation from the assumptions, and the
associated collapse of the wave function. Thus formulate a measurement
theory without the measurement problem. Then construct field operators
as operators within the structure - this is almost impossible within the
standard infinite dimensional structure, hence the motivation for
C*-algebras, but it is very straightforward on the finite dimensional
Hilbert space which is all that is necessary or required for a theory of
measurement.

(PS: some of my sources say that Hilbert space is infinite dimensional,
but I prefer this usage. Is that okay for the professional
mathematicians on the site?).

john baez

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to
In article <eZ$OwHAKG...@clef.demon.co.uk>,
Charles Francis <cha...@clef.demon.co.uk> wrote:

>Those who become relativistic throw away the foundations of
>quantum mechanics and look at C*-algebras instead - although the study
>of operators which do not operate on anything is manifestly ludicrous -
>one would say overspeculative, but to be speculative at all there would
>have to be a possibility that the structure could make physical sense -
>whereas the idea of the exercise is to separate the maths from the
>sense.

I think this is a quite inaccurate characterization of the C*-algebraic
approach to quantum theory. This approach was invented to elucidate
certain aspects of quantum theory that remain obscure in the Hilbert
space approach, and it's been very successful in clarifying issues
that arise in statistical mechanics, nonperturbative quantum field
theory and also quantum field theory on curved spacetime. I could
explain this in detail, but I get the feeling that you have strong
negative emotions about C*-algebras and are not interested in learning
more about them.
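
For readers who have never met the object itself, the definition is short
(standard material, only sketched here): a C*-algebra is a complex *-algebra
A with a complete norm satisfying

   \|ab\| \le \|a\| \|b\|, \qquad \|a^* a\| = \|a\|^2,

and the GNS construction turns any state (a positive normalized linear
functional \omega on A) into a Hilbert-space representation, which is how
the familiar Hilbert-space picture is recovered whenever one wants it back.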

For those who *are* interested in seeing C*-algebras applied to
concrete physics problems, I'd suggest taking a look at Bratteli
and Robinson's "Operator algebras and quantum statistical mechanics",
Haag's book "Local quantum physics: fields, particles, algebras",
and also Wald's "Quantum field theory in curved spacetime and black
hole thermodynamics".

(I haven't actually checked to make sure the last book talks about
C*-algebras, but from what Toby Bartels has mentioned about this book,
I suspect it does.)


Toby Bartels

unread,
Dec 28, 1999, 3:00:00 AM12/28/99
to
Charles Francis <cha...@clef.demon.co.uk> wrote:

>If you study the foundations carefully, you will find that there is
>requirement in the laws of quantum mechanics for background space-time.

I don't see that as inherent in the foundations of QM itself.
The foundations of quantum field theory, yes, but not quantum physics.
In the Hilbert space formalism, for example, we say:
there is a Hilbert space H of pure states;
every observable is a self-adjoint linear operator on H;
the expectation value of yada yada yada ...:
nowhere do we refer to, say, Minkowski spacetime E(3,1).
Only when we decide to apply the formalism to a specific example do we say:
the particle is travelling through 3 dimensions of space,
giving rise to 3 commuting observables x, y, and z; etc.
Quantum field theory, OTOH, is a special case of quantum physics,
which makes further background assumptions about spacetime and so forth.
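
To make the split concrete (ordinary one-particle quantum mechanics, nothing
exotic, and one standard way of stating the omitted postulate): the abstract
rules only ask for a Hilbert space H, self-adjoint observables A, and

   \langle A \rangle_\psi = \langle \psi | A | \psi \rangle / \langle \psi | \psi \rangle,

while "the particle moves in three-dimensional space" is the extra input that
picks H = L^2(R^3) and singles out operators x, y, z and p_x, p_y, p_z with
[x_j, p_k] = i \hbar \delta_{jk}; the spacetime lives in that choice, not in
the postulates themselves.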

>... quantum mechanics and look at C*-algebras instead - although the study


>of operators which do not operate on anything is manifestly ludicrous -

Actually, elements of C* algebras operate on the C* algebra itself.
However, I agree that's not the *point* of what they are,
so I think it best to say "C* algebra element" instead of "operator".
Of course, I catch myself saying "observable" all the time,
which isn't right either, but at least *some* C* algebra elements
are observables (in the C* algebraic approach, that is).

>one would say overspeculative, but to be speculative at all there would
>have to be a possibility that the structure could make physical sense -
>whereas the idea of the exercise is to separate the maths from the
>sense.

You may believe that's the *result* of the exercise,
but I'm pretty certain that's not the *idea*.

>... finite dimensional Hilbert space ...

>(PS: some of my sources say that Hilbert space is infinite dimensional,
>but I prefer this usage. Is that okay for the professional
>mathematicians on the site?).

Anyone who says a Hilbert space *must* be infinite dimensional should be shot.
Every finite dimensional inner product space is a perfectly good Hilbert space.
Nevertheless, the point of the concept of a Hilbert space
is that it generalises finite dimensional inner product spaces
to infinite dimensions in a certain way.
So, I think it would get across *your* point better
to say "finite dimensional inner product space"
rather than "finite dimensional Hilbert space".
However, that's a matter of nuance;
the technical meanings of the two phrases are identical.


-- Toby
to...@ugcs.caltech.edu


Charles Torre

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
ba...@math.ucr.edu (john baez) writes:
>
> For those who *are* interested in seeing C*-algebras applied to
> concrete physics problems, I'd suggest taking a look at Bratteli
> and Robinson's "Operator algebras and quantum statistical mechanics",
> Haag's book "Local quantum physics: fields, particles, algebras",
> and also Wald's "Quantum field theory in curved spacetime and black
> hole thermodynamics".
>
> (I haven't actually checked to make sure the last book talks about
> C*-algebras, but from what Toby Bartels has mentioned about this book,
> I suspect it does.)
>

Actually, I find that Wald's book gives a relatively gentle
(if a bit brief) introduction to many of the basic ideas of
the algebraic approach to quantum field theory. I would say
that one of the main themes (if not THE theme) of the book
is that the principal problem for quantum field theory in
curved spacetime is that there is no preferred
representation of the canonical commutation relations
(CCR). Wald then goes on to show how this problem is
largely eliminated by an algebraic approach.

Since this thread is, in part, about the efficacy of the C*
algebra approach I guess I can generate a bit of shameless
self-promotion... I started to appreciate the power of the
C* approach when Madhavan Varadarajan and I were
considering the time evolution of the state of a free
quantum field between arbitrary Cauchy surfaces in
Minkowski spacetime. We were able to show that this time
evolution could not be unitarily implemented in the
standard Fock representation of the CCR. On the other hand,
this kind of time evolution appears to be easily described
within the C* algebraic framework (although we do not make
very deep use of this formalism). Details appear in
Class.Quant.Grav. 16 (1999) 2651-2668.

Charles Torre
to...@cc.usu.edu


(Greg Weeks)

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
john baez (ba...@math.ucr.edu) wrote:
: This approach was invented to elucidate certain aspects of quantum theory

: that remain obscure in the Hilbert space approach

I also like to think that they *might* have been invented as follows:

After Heisenberg's initial results in matrix mechanics, Dirac and company
promptly abstracted the following equations for the "observables" of a
quantum theory:

the usual equation of motion
&
Q(t)*P(t) - P(t)*Q(t) = i*hbar

Unfortunately, we don't know what these equations mean or what they equate.
Nevertheless, these equations seem to be the essence of quantum mechanics,
so we really want to make them mean something. The second equation
motivates us to view the observables as elements of a *-algebra. But we
don't have either a definite set of elements or a topology.

[The next paragraph is hand-wavy. I'd like to think that this could be
improved upon.]

Our problem is that our algebra "tastes" like an algebra of unbounded
operators, which is mathematically awkward. Let's imagine that we have
included bounded functions of Q(t) and P(t) and focus our attention
exclusively on them. Then the "spectrum" of an observable A -- i.e., the set
of all numbers n such that A - n has no inverse -- will be bounded. The
norm of an observable can be defined as the radius of the smallest disc in
the complex plane (centred at the origin) that contains the entire spectrum.
And the set of elements can be made definite by completing the algebra in
the norm.

Whew, hand-waving indeed! Still, I do believe that for every set of
equations such as the above, there is a unique C*-algebra that embodies the
equations. This vague belief seems to be implicit in the literature that
I've seen.
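
A sketch of how that belief gets made precise in the simplest case (the
standard Weyl-algebra story, stated from memory rather than from any
particular reference): exponentiate to bounded elements

   U(a) = e^{iaQ}, \qquad V(b) = e^{ibP},

and trade the unbounded relation QP - PQ = i\hbar for the Weyl form

   U(a) V(b) = e^{-i\hbar a b} V(b) U(a).

The C*-algebra generated by the U(a) and V(b) subject to these relations is
unique, and for finitely many degrees of freedom the Stone-von Neumann
theorem says all its regular irreducible representations are unitarily
equivalent, which is roughly the uniqueness gestured at above.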


Greg


Toby Bartels

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
John Baez <ba...@math.ucr.edu> wrote at long last:

>and also Wald's "Quantum field theory in curved spacetime and black
>hole thermodynamics".

>(I haven't actually checked to make sure the last book talks about
>C*-algebras, but from what Toby Bartels has mentioned about this book,
>I suspect it does.)

It introduces the formalism in section 4.5 and uses it thereafter.
It doesn't really do the introduction to my taste,
but that's because it wants to hurry up and derive results,
so that may be just what you're after.


-- Toby
to...@ugcs.caltech.edu


john baez

unread,
Dec 29, 1999, 3:00:00 AM12/29/99
to
In article <849kaq$p...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>Charles Francis <cha...@clef.demon.co.uk> wrote:

>>... finite dimensional Hilbert space ...

>>(PS: some of my sources say that Hilbert space is infinite dimensional,
>>but I prefer this usage. Is that okay for the professional
>>mathematicians on the site?).

Yes indeed. I believe the term "Hilbert space" was originally defined
by von Neumann to mean a complete complex inner product space of
*countably infinite* dimension - excluding both the finite-dimensional
and the uncountably-infinite-dimensional cases. With this old definition,
all Hilbert spaces were isomorphic, hence references in the older
literature to "Hilbert space" as if it were a single thing rather
than a kind of thing. For example: Halmos' famous line "Gentlemen,
there's a lot of room in Hilbert space!" But over the years this
overly restrictive definition of Hilbert space has slowly lost favor,
and now most mathematicians use it to mean any complete complex inner
product space, regardless of dimension.

>Anyone who says a Hilbert space *must* be infinite dimensional should
>be shot.

I guess I'm glad *you're* not king of the universe. But at least I
would not personally suffer from this aspect of your regime! If you
want the category of Hilbert spaces to have properties similar to
those of the category of vector spaces, you really want to allow Hilbert spaces of
any dimension - that's the ultimate reason why the old, overly
restrictive definition gave way to the new one.


C. Cagle

unread,
Dec 30, 1999, 3:00:00 AM12/30/99
to
In article <83rs3h$bdb$1...@rosencrantz.stcloudstate.edu>, Charles Francis
<cha...@clef.demon.co.uk> wrote:

> In article <1999122121...@charity.ucr.edu>, john baez
> <ba...@math.ucr.edu> writes
>
> >I disagree. Until the turn of the 1980s, advances in particle
> >accelerator technology led to many surprising discoveries in particle
> >physics. Technology was leading theory. Since then, I'd say, our
> >theoretical understanding of particle physics has almost gone past
> >our current technological ability to test it.
> >
> <snip>
>

> I don't disagree with anything you say about the discovery of new
> particles, but the fundamental problem of twentieth century science is
> the comprehensibility of quantum mechanics, and that has been largely
> swept under the carpet for fifty years.

> Let us agree on that, but can I ask you to be careful about what you say
> about the comprehensibility of quantum mechanics. When people, such as
> Sandhu, refer to it as incomprehensible they are generally not referring
> to the mathematical formalism, but to the physical meaning.

Ultimately, it is the physics that we are after. Detailed concepts
which allow us to comprehend the universe as it is (a suspected
objective reality that most people feel certain lies behind how it
appears to be) should be far more useful in the beginning than precise
equations which are not intellectually connected in a meaningful manner
with that supposed underlying physics. This argument has raged on and
on for a century at least where formalism reigned supreme and intuitive
comprehension took a back seat.

Peter Guthrie Tait (1837-1901) more than one hundred years ago, when
reviewing Poincare's "Thermodynamique", wrote:

"Some forty years ago, in a certain mathematical circle at Cambridge,
men were wont to deplore the necessity of introducing words at all in a
physico-mathematical textbook: the unattainable, though closely
approachable Ideal being regarded as a world devoid of aught but
formulae! But one learns something in forty years, and accordingly the
surviving members of that circle now take very different view of the
matter. They have been taught alike by experience and by example to
regard mathematics, so far at least as physical enquiries are
concerned, as a mere auxiliary to thought...this is one of the great
truths which were enforced by Faraday's splendid career."

The evil mathematicians that John supposes are non-existent are
actually alive and well, and some post to this and other newsgroups
quite regularly; but who would it benefit to name them aloud, when such
an attempt itself would trigger the automatic reaction against any post
which might be interpreted as a flame? The whole post gets tossed back
to the sender - so we ought to take Mr. Baez's query as a bit of
rhetoric and not a genuine request to name names. :-). So without
naming names I, at least for one, acknowledge that the elitism of
mathematicians which Tait was decrying is an everyday feature here in
the wonderland of usenet's science related newsgroups. Tait's comments
now ring quite hollow as the lessons supposedly learned by that circle
at Cambridge over the years did not become engraved in any stone to
which present day mathematical physicists pay homage.

Charles Cagle


Ralph E. Frost

unread,
Jan 2, 2000, 3:00:00 AM1/2/00
to
C. Cagle wrote:
>
> In article <83rs3h$bdb$1...@rosencrantz.stcloudstate.edu>, Charles Francis
> <cha...@clef.demon.co.uk> wrote:
>
> > In article <1999122121...@charity.ucr.edu>, john baez
> > <ba...@math.ucr.edu> writes
>
> > >I disagree. Until the turn of the 1980s, advances in particle
> > >accelerator technology led to many surprising discoveries in particle
> > >physics. Technology was leading theory. Since then, I'd say, our
> > >theoretical understanding of particle physics has almost gone past
> > >our current technological ability to test it.

..snipping all the good points CC and others made..


I have been thinking about the subject of this thread and would like to
offer the following points in the opening moments of the next semester.

IMO, a couple of things ail fundamental research in physics.

First, I think most physics folks _know in their heart of hearts_ that
the modern day presentation is fundamentally and deeply flawed. They
may not admit it to anyone else, but they sort of know it in an aching,
gnawing way.

This type of apparent negativity is sort of intuited. It's a notion that
ripples through the subconscious mind. Other than folks like myself,
very few people speak anything about it. It's difficult to put into a
rational string of words. Also, let's face one truth at least, NO ONE
inside the physics linguistic community can speak out about it yet.
They would be ridiculed far more than any outsider would ever be. Plus,
when 'physics' is tightly confined to math expressions, modern-day
math-physics people can't see that there IS a problem. Once the
constraint of adhering to testable experience is removed, I suspect
there are a lot of degrees of freedom to fiddle with in the math realm.
And so folks do. It's only human nature.

In any event, everyone reading this post shares the intuitive
understanding that a century from now folks will look back at the huge
conflagration and convolution in present day islandic physics and be
real surprised that more folks couldn't or wouldn't or didn't dare try
to articulate, to SHOUT, the rudiments of the deeply rooted problem of
fundamental belief. But, guess what? People strongly resist changes
in paradigm, even when they intuitively (and perhaps intellectually and
emotionally) KNOW that only through a change in belief can the emerging
outcome be obtained. Trust me. I am no different. I am tossing about
in a similar boat in the same tempestuous sea.

So, one thing is we sort of ~know~ that physics is due for a major
overhaul but physics folks aren't quite able to undergo the necessary
group therapy to undergo the transformation. Nose to the grindstone.
It's their job not to change.


That leads to an amplifying (worse) problem in science education. There
are many, many, many extremely good science educators and programs all
of which do an excellent job of turning out well-educated product. But
they continue to treat the vast and growing complexity and tectonic
divisions in science as "normal" rather than hammering their students to
prepare for, in fact to seek and embrace deep conceptual changes in the
fabric of how we see and describe reality. This means that folks are
still not being properly prepared in the various educational arenas.

Now, I don't mean folks' technical educations are lacking or that the
standard model and recent great successes in organizing the subparticle
stew with the help of nineteenth century group theory math is not a
major accomplishment. It is.

But it doesn't erase the general intuitive understanding that in
addition to all the strong pluses, the conceptual foundation of physics
is DUE for a deep and major rewrite. Folks have known about this
transition most of this century and we are still ill-prepared for it.
The upshot is science educators' hearts have not been truly in their
work. The students feel this lack and experience various discomforts as
a result. None of this subconscious awareness helps to spur masses of
students to charge ahead with great confidence in science.

It's sort of like a period to sharpen pencils and organize stuff prior
to the start of a totally NEW approach to describing reality. Yet, the
transition also _feels_ like a death of large proportions - a middle age
loss experienced and mourned in concert before during and after it
happens, yet renewing the entire system as the change proceeds. And to
make matters worse for willful, control types, no one person knows
the full extent or nature of the transformation. So the change is also
like a child constricted in the birth canal. Fire, blood, death and
birth. Naturally flowing from mere uncertainty to the fullness of the
new unknown.

Of course, this slight difficulty is not a real ailment but a good and
healthy, natural transition. It's a time of deep transformation.
Still, most folks don't want to admit consciously that such a thing can
_EVER_ occur.

Yet we know it must.

Do I need to reiterate that I am in the same boat?

But the REAL ailment, which spins out of these subailments is a simple
lack of vision. The pitiful part of this is that there is room for such
vast improvements - in energy generation, transportation and some of the
more mundane life support systems. But lacking a vision, lacking
right thoughts, etc., while harboring a fear that at any moment the
entire scientific paradigm will shift from one foot to the other - folks
sit lackluster playing math riddles or shouting out SU(3) to those who
know the game.

The lack of vision is partly a cold-war induced hangover and partly due
to the unacknowledged intuition that somehow, the paradigm is going to
condense and will make sense again.

So that, in my opinion, is what 'ails' fundamental research in physics.
People know an important truth at the subconscious level. The truth,
initially feels at odds with "progress" and politically correct short
term gains and thus is partly suppressed. The resistance and
suppression damps hope and excitement. Vision evaporates. Few dare
speak for fear of fear.

Excessive fear and hopelessness, lack of vision and imagination, and
sadly enough, an educationally sanctioned attempt at disconnection of
world_3 artifacts from world_1. These all combine in a social system
that demands thinness, tight buttocks, stringent mathematical precision
and ever increasing quarterly reports. Plus, always smiling positive
faces, all around.


The good news is paradigmatic change of the type we face in science is
truly natural. It's really no big deal. Yang becomes yin. No amount of
politically correct policies and edicts can avert the natural tide of
change. Emergence moves from the unconscious up through the
subconscious to expressions within consciousness.

And still, this transition in paradigmatic belief in whatever specific
form(s) it shall take can and will ONLY occur one individual at a time!

That is why making appropriate and timely changes in science education
via Usenet is so vital.


Happy new year!

Toby Bartels

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
Mikko Stenlund <mste...@cc.hut.fi> wrote in part:

>So what's an _infinite_ dimensional vector space? A vector space that has an
>_infinite_ dimensional basis, right? Not quite: an infinite dimensional
>vector space does not necessarily admit a basis. Hence the definition: an
>infinite dimensional space is a space that doesn't have a _finite_
>dimensional basis.

>[Moderator's note: if you believe in the axiom of choice, every
>vector space has a basis. If this basis is infinite, we say
>the vector space is infinite-dimensional. - jb]

I think Mikko's version can be made sense of even using the axiom of choice.
A Hamel (sp?) basis is a set of vectors such that
no finite nontrivial linear combination of them equals 0
and every vector is a finite linear combination of them.
Using choice, you can prove that every vector space has a Hamel basis.
A Stenlund basis (what's the correct term?) is a set of vectors such that
no convergent nontrivial linear combination of them equals 0
and every vector is a convergent linear combination of them.
Using choice, you can prove that every Hilbert space has a Stenlund basis.
However, I believe there are incomplete inner product spaces
that lack Stenlund bases. They are still infinite dimensional.
(Indeed, in finite dimensions, the two concepts coincide.)
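
A standard example showing the two notions really do come apart (nothing
original here): in \ell^2 the unit vectors e_n = (0, ..., 0, 1, 0, ...) form
a basis in the convergent sense, since every x = \sum_n x_n e_n with
\sum_n |x_n|^2 < \infty, but they are not a Hamel basis: a vector like
(1, 1/2, 1/4, 1/8, ...) is not a *finite* linear combination of them, and in
fact any Hamel basis of \ell^2 is uncountable.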


-- Toby
to...@ugcs.caltech.edu


Chris Hillman

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
On 30 Dec 1999, C. Cagle wrote:

> Ultimately, it is the physics that we are after. Detailed concepts
> which allow us to comprehend the universe as it is (a suspected
> objective reality that most people feel certain lies behind how it
> appears to be) should be far more useful in the beginning than precise
> equations which are not intellectually connected in a meaningful
> manner with that supposed underlying physics.
>
> This argument has raged on and on for a century at least where
> formalism reigned supreme and intuitive comprehension took a back
> seat.

Non-mathematicians may have a hard time believing this, but the fact is
that mathematicians are constantly searching for -intuitive understanding-
of the abstract "mathematical objects" they define and whose mathematical
properties they study. "Good theories" in mathematics -facilitate- such
intuitive understanding and "good theorems" frequently not only give
valuable insights into the "structure" or "nature" of some class of
"mathematical objects", but also yield additional -intuition- into the
"concrete nature" of the objects in this class. IOW, mathematicians often
study very abstract concepts (for example in the foundations or in
category theory), but good mathematicians always keep some simple but
nontrivial concrete examples of the abstract constructions they are
studying in mind. In short, mathematicians actually have -plenty- of
intuition, and deep insights always involve a penetrating mathematical
-intuition- concerning some abstract mathematical phenomenon.

As for "excessive formalism", you might be referring the Bourbaki
movement, or some schools of logic around the turn of the century, which
emphasized formal methods. If so, I think there is no question that the
Bourbaki movement was very beneficial in the early part of this century
since it unified and ultimately simplified vast tracts of contemporary
mathematics. In the second half of the century, people like Lawvere,
Conway, Atiyah and his students (to name but a few) have gained much
intuitive insight into these abstract constructions, and most
mathematicians have participated in something of a "backlash" against
what many
now regard as the excessive formalism and often very dry expository style
of Bourbaki.



> "Some forty years ago, in a certain mathematical circle at Cambridge,
> men were wont to deplore the necessity of introducing words at all in
> a physico-mathematical textbook: the unattainable, though closely
> approachable Ideal being regarded as a world devoid of aught but
> formulae! But one learns something in forty years, and accordingly
> the surviving members of that circle now take very different view of
> the matter. They have been taught alike by experience and by example
> to regard mathematics, so far at least as physical enquiries are
> concerned, as a mere auxiliary to thought...this is one of the great
> truths which were enforced by Faraday's splendid career."

If I might reformulate what I think Tait was saying: in the mid nineteenth
century, some mathematicians and logicians, perhaps inspired by Boole and
Frege, tried to write in an extremely "formal" style, even trying to
deduce everything by entirely formal steps using some kind of
"substitution rules" and the like. One of the books on logic by Quine
gives a good modern example of this type of very formalistic writing.

Tait is complaining, I think, that this was taking formalism too far. In
fact, this style of writing was never common in mathematics, although it
can still be found in some schools of "logic" (which is now almost a
separate subject from mathematics in some ways). Most mathematicians in
the twentieth century, including Poincare at the turn of the century, right
through Thurston (a contemporary mathematical genius who has recently
written a lucid article explaining his views of what mathematicians do and
how they do it, in response to the infamous claim in Scientific American
that "Math is dead"), have emphasized the important of natural language,
simple but nontrivial examples to motivate a theory, and many mathematical
papers are written very clearly.

Equations and mathematical symbols continue to play a crucial role in
mathematical writing, however, because used properly, they can be an
-invaluable- aid to thought. For example, try translating a problem in
high school algebra into natural language and solving it without using any
equations, letter symbols for variables and constants, or any formal
algebraic manipulations!

Last but not least, I think Tait was saying that mathematical thinking is
-invaluable- to physicists as an aid to thinking about abstract, general
concepts without getting confused. Indeed, I would define mathematics as
the art/science of precise thinking about abstract concepts without
getting confused, with the goal of eventually stating and proving precise
characterizations of the behavior or nature of a given well defined class
of abstract objects, or the range of possible phenomena of precisely
characterized types under precisely stated conditions.

IOW, mathematics is the best friend of anyone who wants to think about
anything subtle, abstract, or complex.

So it seems perfectly clear to me that Tait was -not- railing against "the
evils of mathematics", as you seem to think.



> The evil mathematicians that John supposes are non-existent are
> actually alive and well and some post to this and other newsgroups
> quite regularly but who would it benefit to name them aloud when such
> an attempt itself would trigger the automatic reaction against any post
> which might be interpreted as a flame?

True, your post would probably have been rejected if you had named me or
John, for instance, as an "evil mathematician". FWIW, I find it extremely
funny that anyone might think a mathematician who frequently posts
(admittedly rather challenging) expository messages or corrections of
misstatements by others is "evil", or that "mathematics" itself could be
evil or have a harmful effect on physics, on science, or on humanity in
general. This is funny because it is so obvious that exactly the
-opposite- is true.

Mathematics and mathematicians are extremely -beneficial- to society in
general and science in particular. Attempts to compose expository papers
about successful and powerful mathematical ideas are -beneficial- to
students of physics as well as mathematics, and corrections of
misstatements are -beneficial- to students and interested laypeople. And
of course, all our wonderful modern technology, which makes our
lives so much more interesting, safe and pleasant, is ultimately based
upon the work of mathematicians, going right back to the Greek
trigonometers and the Babylonian algebraists who introduced fractions, and
Viete, the French cryptographer and mathematician who introduced modern
algebraic equations with letter symbols for variables and formal algebraic
manipulations, in short, high school algebra as it is taught today, and
Fourier, and Gauss, and Cauchy, and Riemann, and Weierstrass, and Lie, and
Hilbert, and Noether, and Weyl, and Banach, and Stone, and von Neumann and
Turing and Shannon, and so on and so forth.

> The whole post gets tossed back to the sender - so we ought to take
> Mr. Baez's query as a bit of rhetoric and not a genuine request to
> name names. :-). So without naming names I, at least for one,
> acknowledge that the elitism of mathematicians which Tait was decrying
> is an everyday feature here in the wonderland of usenet's science
> related newsgroups.

Modern mathematical physics -is- intellectually demanding, no doubt about
that. But you should be aware that many leading mathematicians, such as
Atiyah and (recently) Thurston, have worked very hard to try to make some
very difficult mathematics as clear as possible, by writing careful
expositions motivated by specific concrete simple but nontrivial examples
of their theories, by providing intuitive motivation for definitions, and
by striving to make the statement and proof of the theorems as clear as
possible.

A good example is the recent book by Lawvere and Schanuel which attempts
to explain the most important concepts of category theory in a way which
steadily develops the intuition of beginning students for some very
abstract concepts.

> Tait's comments now ring quite hollow as the lessons supposedly
> learned by that circle at Cambridge over the years did not become
> engraved in any stone to which present day mathematical physicists pay
> homage.

Again, I would cite the recent article by Carlo Rovelli on quantum gravity
as a good example of a serious attempt by a leading researcher to explain
some very abstract concepts in a simple, intuitive way, to provide a "big
picture" and to explain very clearly the motivation for the search for a
quantum theory of gravity.

Math and physics -have- grown ever more abstract, but the point which I
think Tait would appreciate if he were alive today is that, while these
abstract concepts are indeed much harder to master than the mathematical
physics of his day (e.g. multivariable calculus a la Gauss and Kelvin,
vector calculus a la Gibbs, multilinear algebra, complex analysis a la
Weierstrass, and Fourier theory a la Dini et al.), they are -invaluable-
tools, aids for thinking clearly, as physicists confront ever more subtle
and complex phenomena.

Chris Hillman

Home Page: http://www.math.washington.edu/~hillman/personal.html


James Gibbons

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
Ralph E. Frost wrote:

>[..]

> Now, I don't mean folks' technical educations are lacking or that the
> standard model and recent great successes in organizing the subparticle
> stew with the help of nineteenth century group theory math is not a
> major accomplishment. It is.
>
> But it doesn't erase the general intuitive understanding that in
> addition to all the strong pluses, the conceptual foundation of physics
> is DUE for a deep and major rewrite. Folks have known about this

> transition most of this century and we are still ill-prepared for it.


IMHO the mathematics needed to describe "reality" (major
paradigm shift) has not been discovered yet. Continuum physics doesn't
work at Planck scales. We need a working quantum gravity theory.
There are also numerous questions that have no answers, such as why
the fundamental constants are as they are, and where did this
universe come from? We also have no explanation of consciousness.
(Are we perceiving reality as we should, or is there something else
of which this is merely a shadow?) Why DO mathematics and physics
coexist as they do? (the unreasonable effectiveness of mathematics).
I think some (hybrid) concepts from mathematics AND
physics may eventually explain at least some of these questions.
But before that happens, I believe that mathematics must be developed
further (such as (n-)category theory and discrete structures).
But this is only my opinion...

Jim Gibbons


Vesselin G Gueorguiev

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
Charles Francis wrote:

> In article <84arcp$2...@news.dtc.hp.com>, (Greg Weeks)
> <we...@golden.dtc.hp.com> writes

[...]

> >Unfortunately, we don't know what these equations mean or what they equate.
> >Nevertheless, these equations seem to be the essence of quantum mechanics,
> >so we really want to make them mean something.

> If we eliminate Hilbert space, then we lose the meaning.

We don't eliminate the Hilbert space; it is in there, just pick the right one.
Take for example the Lie algebra su(3): on its own there is "no meaning"
to it, but consider the standard model and you get a lot of Hilbert spaces
and a lot of meaning. The quarks are related to the fundamental representation
of su(3), then the baryons, and so on and so on.... Finally, if we study
exp(su(3)), which is the group SU(3), we find all possible representations
in the algebra of functions over SU(3), so the Hilbert space is there, but
needs to be interpreted in a useful way.
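
Here is a small numerical sketch of that point, using su(2) instead of
su(3) just to keep the matrices tiny: the *same* abstract commutation
relations are carried by Hilbert spaces of different dimensions, and
choosing the representation is where the physical interpretation comes
in. (The spin-1/2 and spin-1 matrices below are the standard ones; this
is only an illustration, nothing specific to the standard model.)

import numpy as np

def check_su2(Jx, Jy, Jz):
    # verify the su(2) relations [J_a, J_b] = i eps_{abc} J_c
    def comm(A, B):
        return A @ B - B @ A
    assert np.allclose(comm(Jx, Jy), 1j * Jz)
    assert np.allclose(comm(Jy, Jz), 1j * Jx)
    assert np.allclose(comm(Jz, Jx), 1j * Jy)

# spin-1/2 representation on C^2 (half the Pauli matrices)
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
check_su2(sx, sy, sz)

# spin-1 representation on C^3
r = 1 / np.sqrt(2)
Jx = np.array([[0, r, 0], [r, 0, r], [0, r, 0]], dtype=complex)
Jy = np.array([[0, -1j*r, 0], [1j*r, 0, -1j*r], [0, 1j*r, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)
check_su2(Jx, Jy, Jz)

print("same abstract relations, two different Hilbert spaces")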

> >The second equation
> >motivates us to view the observables as elements of a *-algebra. But we
> >don't have either a definite set of elements or a topology.
> >

> >Our problem is that our algebra "tastes" like an algebra of unbounded
> >operators, which is mathematically awkward.

> But if we use a finite dimensional Hilbert space, then the mathematical
> awkwardness vanishes.
>
> And if we go back to Dirac's formulation of qm as measurement theory, we
> really have no way to bring in infinite dimensions.

Something doesn't sound right here. Dirac, as far as I know, emphasized the
importance of certain unbounded operators (the momentum, for example), and such
operators often live in infinite dimensional spaces.


Vesselin G Gueorguiev

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
Greg Weeks wrote:

[...]

> Whew, hand-waving indeed! Still, I do believe that for every set of
> equations such as the above, there is a unique C*-algebra that embodies the
> equations. This vague belief seems to be implicit in the literature that
> I've seen.

Well, we can start with a freely generated C*-algebra and use our set
of equations to define the quotient C*-algebra we need.

John Baez

unread,
Jan 3, 2000, 3:00:00 AM1/3/00
to
In article <38702AE2...@phys.lsu.edu>,
Vesselin G Gueorguiev <vess...@baton.phys.lsu.edu> wrote:

>Charles Francis wrote:

>> If we eliminate Hilbert space, then we lose the meaning.

>We don't eliminate the Hilbert space; it is in there, just pick the right one.
>Take for example the Lie algebra su(3): on its own there is "no meaning"
>to it, but consider the standard model and you get a lot of Hilbert spaces
>and a lot of meaning. The quarks are related to the fundamental representation
>of su(3), then the baryons, and so on and so on.... Finally, if we study
>exp(su(3)), which is the group SU(3), we find all possible representations
>in the algebra of functions over SU(3), so the Hilbert space is there, but
>needs to be interpreted in a useful way.

Right. Excellent explanation! Historically, among the first groups
to be studied were "permutation groups" - groups acting on finite sets.
Only later did people invent "abstract groups", because the concept of
a group of symmetries OF SOMETHING is easier to master than the concept
of a group "in itself", which may be the group of symmetries of any number
of different things. In the long run, it turned out to be very useful
to separate the concept of a group and its action on a set, because the
same groups keep showing up in different contexts, acting on different
sets. But in nature, the groups we first encounter are groups acting on
sets.

People also studied groups acting on vector spaces - i.e., group
representations - before inventing the notion of an abstract group.
But eventually it became clear that we really want to understand
all different representations of a given abstract group. And
in modern particle physics, we often start with an abstract group;
its different representations then correspond to different particles
that might appear in a theory with this group of symmetries.

The same trend is the origin of C*-algebras. We start by thinking
of C*-algebras of operators on a fixed Hilbert space; then we realize
that for various reasons we want to treat the Hilbert space as
variable while keeping the abstract C*-algebra fixed. Basically,
all these reasons boil down to what Vesselin said: we want the freedom
to pick the "right" Hilbert space for a given problem.

Given the general trend at work here, it's perhaps not surprising
that the situation for groups is really a special case of the situation
for C*-algebras. More precisely, if you hand me a group G, I can cook
up a C*-algebra C[G] with one unitary element [g] for each element g of
G, with multiplication and the *-structure defined in the obvious way:

[g] [h] = [gh]

[g]* = [g^{-1}]

Then unitary representations of G give representations of C[G]
and vice versa. This is a special case of a trick Vesselin mentioned
in another post: namely, defining a C*-algebra by generators and
relations. In short, we've got ourselves a cute little functor
from the category of groups to the category of C*-algebras. Given
that groups are all about *symmetry* while C*-algebras are all about
*quantum-mechanics*, it's nice to have this functor from groups to
C*-algebras!

(A number of variations on this trick are popular when G is a
topological group, e.g. a Lie group.)
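
Here is a minimal computational sketch of this construction, taking
G = Z/3 and realizing each [g] as the permutation matrix of left
translation (the regular representation). This is just one convenient
concrete picture of C[G] for the demo, not the abstract algebra itself.

import numpy as np

n = 3  # G = Z/n, written additively

def U(g):
    # the unitary [g]: permutation matrix sending e_h to e_{(g+h) mod n}
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1.0
    return M

# real matrices, so the *-operation is just the transpose
for g in range(n):
    for h in range(n):
        assert np.allclose(U(g) @ U(h), U((g + h) % n))  # [g][h] = [gh]
    assert np.allclose(U(g).T, U((-g) % n))              # [g]* = [g^{-1}]
    assert np.allclose(U(g) @ U(g).T, np.eye(n))         # each [g] is unitary

# a general element of C[G] is a linear combination  sum_g c_g [g]
a = 1.0 * U(0) + 2.0 * U(1) - 0.5 * U(2)
print(a)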

>Dirac, as far as I know, emphasized the importance of certain
>unbounded operators (the momentum, for example), and such
>operators often live in infinite dimensional spaces.

Yeah, here's a cute little puzzle: show there are no operators
P and Q on a finite-dimensional vector space satisfying

PQ - QP = i

The proof is easy when you hit upon it.

Btw, we need to do a certain amount of massaging to fit
unbounded operators into the C*-algebraic format, but it's
not terribly hard.

Charles Francis

unread,
Jan 4, 2000, 3:00:00 AM1/4/00
to
In article <84s3e6$p...@charity.ucr.edu>, John Baez <ba...@galaxy.ucr.edu>
writes

I do not dispute the historical value of SU(3), or the value of a method
which goes from studying group properties to finding representations.
But these days we can feel pretty comfortable with the idea of quarks,
and we make the subject much harder for students by following the
historical argument, rather than simply building the model from quarks.

>The same trend is the origin of C*-algebras. We start by thinking
>of C*-algebras of operators on a fixed Hilbert space; then we realize
>that for various reasons we want to treat the Hilbert space as
>variable while keeping the abstract C*-algebra fixed. Basically,
>all these reasons boil down to what Vesselin said: we want the freedom
>to pick the "right" Hilbert space for a given problem.

Whereas I want to construct the right Hilbert space solely from
properties established empirically from a theory of measurement, and
show that it results in the right C*-algebra.


>
>
>>Dirac, as far as I know, emphasized the importance of certain
>>unbounded operators (the momentum, for example), and such
>>operators often live in infinite dimensional spaces.
>

I regard Dirac as one of the four greatest theoretical physicists of all
time, but that does not mean I think that everything he said was
perfect. We don't need infinite dimensions to define momentum space. Let
{|x>} be the basis of a Hilbert space of dimension n.

For each p in the unit circle, let

|p> = Sum_x e^-ixp |x>

so
<x|p> = e^-ipx

Then (ignoring constants)

|x> = Integral_dp e^ixp |p>

The momentum operator can be written

P = Integral_dp |p> p <p|

= Sum_x Sum_y Integral_dp |y><y|p> p <p|x><x|

= Sum_x Sum_y Integral_dp |y> (-id/dx) e^-ip(y-x) <x|

= Sum_x |x> (-id/dx) <x|

Which seems fine to me. Thus we can define the derivative -id/dx as an
operator from the momentum space expansion, even for finite dimensions.
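
Here is a rough numerical sketch of the construction above (assuming
numpy, hbar = 1 and unit lattice spacing). The matrix elements are
P_yx = (1/2pi) Integral_dp p e^-ip(y-x) over (-pi,pi], evaluated on a
finite set of sites; the closed form i(-1)^(y-x)/(y-x) off the diagonal
(and 0 on it) is checked against a brute-force integral, and the
resulting finite matrix is checked to be Hermitian.

import numpy as np

n = 8                      # lattice sites x = 0 .. n-1
M = 100000                 # quadrature points for Integral_dp
dp = 2 * np.pi / M
p = -np.pi + (np.arange(M) + 0.5) * dp   # midpoints covering (-pi, pi]

P = np.zeros((n, n), dtype=complex)
for y in range(n):
    for x in range(n):
        P[y, x] = np.sum(p * np.exp(-1j * p * (y - x))) * dp / (2 * np.pi)

# closed form of the same integral, for comparison
Pc = np.zeros((n, n), dtype=complex)
for y in range(n):
    for x in range(n):
        if y != x:
            Pc[y, x] = 1j * (-1.0) ** (y - x) / (y - x)

print(np.max(np.abs(P - Pc)))          # small: integral matches closed form
print(np.max(np.abs(P - P.conj().T)))  # ~0: P is Hermitian on the n-dim space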

>
> here's a cute little puzzle: show there are no operators
>P and Q on a finite-dimensional vector space satisfying
>
>PQ - QP = i
>
>The proof is easy when you hit upon it.
>

But is it interesting? I take it you are thinking of canonical
quantisation procedures. Not something I like to think about. I tend to
regard this in the light of the religious meaning of canon, a peculiar
rite or incantation performed by the lecturer at the beginning of a
course to ensure a suitable level of superstitious belief on the part of
all those students who persevere to the bitter end. We can develop the
whole of (discrete) quantum mechanics and (discrete) qed without once
quantising (or still worse) second quantising anything.


In my paper I construct multiparticle states, in the normal way as a
direct product, and generate a subspace of physical states from creation
and annihilation operators obeying the (anti)commutation relation

[|x>,|y>]+- = <x|y> = kronecker delta_x_y

counter to your example. But of course, like everyone else, I cheat. I
use a finite direct product of N spaces, where N is a number larger than
any number of particles that I actually want to use. So, as in analysis,
I do not really have an infinity, but an approximation based on an
indefinitely large number. The difference as far as I can see is that I
make clear that I am cheating, whereas everyone else simply does not
notice.

Toby Bartels

unread,
Jan 5, 2000, 3:00:00 AM1/5/00
to
Vesselin G Gueorguiev <vess...@baton.phys.lsu.edu> wrote:

>Well, we can start with a freely generated C*-algebra and use our set
>of equations to define the quotient C*-algebra we need.

and others have written similar things.

I attended a lecture on operator algebras once
where it was revealed that things are a bit more complicated.
Basically, you can't get a C* algebra from a *set* of generators and relations.
For example, <p,q | p = p*, q = q*, pq - qp = i> doesn't work.
Instead, you get a C* algebra from a *normed set* of generators and relations.
<P,Q; |P| = |Q| = 1 | PP* = P*P = QQ* = Q*Q = 1, PQ - QP = I forget> works.
(Interpretations of examples: p and q are momentum and position,
which are unbounded, while P and Q are the exponentiated Weyl operators.)
Also note that the norm is allowed to shrink when you form the quotient.
(In particular, operators that were nonzero may become zero.)
Ultimately, this can be explained by the lack of a left adjoint
for the forgetful functor from C* algebras to sets,
along with the existence of a left adjoint for the forgetful functor
from C* algebras to normed sets (whose morphisms are allowed to shrink norms).
That's what the lecturer said, anyway.


-- Toby
to...@ugcs.caltech.edu


(Greg Weeks)

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to
(Greg Weeks) (we...@golden.dtc.hp.com) wrote:
: John Baez (ba...@galaxy.ucr.edu) wrote:
: : The same trend is the origin of C*-algebras. We start by thinking

: : of C*-algebras of operators on a fixed Hilbert space;

: In a way this isn't true. Dirac submitted a paper in January 1926
: describing an abstract *-algebra of "q-numbers".

I was wrong here. The q-numbers were abstractions of matrix operands,
which are essentially Hilbert space elements. Nevertheless, I think it is
worth noting that quantum mechanics was formulated algebraically within
months of Heisenberg's original paper.


Greg


Doug B Sweetser

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to
Hello John:

I don't get the puzzle.

----
Yeah, here's a cute little puzzle: show there are no operators


P and Q on a finite-dimensional vector space satisfying

PQ - QP = i

The proof is easy when you hit upon it.

----

I went to EDM2, 36G to get the definition:

C*-algebras: A Banach *-algebra A satisfying ||x* x|| = ||x||^2 for all
x element of A is called a C*-algebra.

Here is how I defined the norm ||q|| for quaternions

||q|| = q* q = (t^2 + x^2 + y^2 + z^2, 0, 0, 0)

Quaternions that use the Euclidean product (q* q') are a
finite-dimensional vector space, which is also a topological algebraic
field (i.e. it is equipped to do more than just add and be multiplied by a
scalar).

Given a state made up of a bunch of quaternions,

phi = sum from n=0 to m (tn, xn, yn, zn)

We can rewrite this using an exponential.

phi = |phi| exp(scalar(phi)/|phi| vector(phi)/|vector(phi)|)

To make this a little less awkward, without a loss of generality, let
||phi|| = 1, and define the 3-vector I = vector(phi)/|vector(phi)|.

exp(phi) = exp(scalar(phi) I)

Represent the scalar(phi) using the scalar Euclidean product of two
quaternions A and B.

for each n
scalar(phi) = scalar(((at, ax, ay, az)* (bt, bx, by, bz) +
(bt, bx, by, bz)* (at, ax, ay, az))/2)

= (at bt + ax bx + ay by + az bz, 0, 0, 0)

Define one quaternion operator Ax that acts on phi by multiplying phi
by the scalar axn (the n implies a sum from n=0 to m):

A phi = axn phi

Define another quaternion operator Bx that acts on phi by multiplying
phi by the scalar bxn. This time, use a derivative of phi
with respect to ax in the direction of -I to get the real value bxn:

B = -I d/daxn

B phi = -I d phi/dax = -I bxn I exp(scalar(phi) I)

= bxn phi

Calculate the commutator:

[A, B] phi = AB phi - BA phi

= axn (-I) d/daxn phi - I d/daxn axn phi

= axn (-I) d phi/daxn - I daxn/daxn phi - axn (-I) d phi/daxn

= -I phi

Therefore

[A, B] = -I

Since I is an arbitrary 3-vector, it could be in the direction of i hat.


Could someone point to a problem in this counter-example? It is
absolutely central to my attempt to do quantum mechanics with
quaternions. Perhaps it is the extra properties of quaternions that
make this work.

Thanks,
doug swee...@world.com


Aaron Bergman

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to
In article <FnvL2...@world.std.com>, Doug B Sweetser wrote:
>Hello John:
>
>I don't get the puzzle
>
>----
>Yeah, here's a cute little puzzle: show there are no operators
>P and Q on a finite-dimensional vector space satisfying
>
>PQ - QP = i
>
>The proof is easy when you hit upon it.
>----
>
[Much snippage]

>Define one quaternion operator Ax that acts on phi by multiplying phi
>by the scalar axn (the n implies a sum from n=0 to m):
>
> A phi = axn phi
>
>Define another quaternion operator Bx that acts on phi by multiplying
>phi by the scalar bxn. This time, use a derivative of phi
>with respect to ax in the direction of -I to get the real value bxn:
>
> B = -I d/daxn
>
> B phi = -I d phi/dax = -I bxn I exp(scalar(phi) I)
>
> = bxn phi

These aren't operators on a finite dimensional space. They
are operators on an infinite dimensional space -- a space
of functions.

The solution to John's puzzle is:

take the trace of both sides.
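
To see it numerically (purely a sanity check, not part of the proof;
P and Q below are just random matrices standing in for any candidates):

import numpy as np

# tr(PQ - QP) = tr(PQ) - tr(QP) = 0 for any n x n matrices P and Q,
# while tr of i times the identity is i*n != 0, so PQ - QP = i cannot
# hold in finite dimensions.
rng = np.random.default_rng(0)
n = 4
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

print(np.trace(P @ Q - Q @ P))     # ~0 up to rounding
print(np.trace(1j * np.eye(n)))    # i*n, never zero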

Aaron
--
Aaron Bergman
<http://www.princeton.edu/~abergman/>


John Baez

unread,
Jan 6, 2000, 3:00:00 AM1/6/00
to
John Baez wrote:

>Yeah, here's a cute little puzzle: show there are no operators
>P and Q on a finite-dimensional vector space satisfying
>
>PQ - QP = i
>
>The proof is easy when you hit upon it.

In article <FnvL2...@world.std.com>,
Doug B Sweetser <uunet!world!swee...@ncar.UCAR.EDU> wrote:

>I don't get the puzzle.

You don't get what it means, or you don't get the answer?
Please let me know whether you want me to explain the puzzle
or to give away the answer. I'll be glad to do either.

By the way, if you don't like the phrase "operator on a
finite-dimensional vector space", feel free to replace it
with the phrase "matrix" - for the purposes of this puzzle,
it means almost the same thing.

>I went to EDM2, 36G to get the definition:
>
>C*-algebras: A Banach *-algebra A satisfying ||x* x|| = ||x||^2 for all

>x element of A is called a C*-algebra.

Of course then you gotta look up "Banach *-algebra" to get the rest
of the story.

Usually people talk about C*-algebras over the complex numbers,
but one can also study C*-algebras over the real numbers. The
quaternions, so dear to your heart, are a C*-algebra over the real
numbers, but not over the complex numbers.

It's pretty hard to prove that the only division algebras over
the real numbers are the reals, complexes and quaternions. On
the other hand, it's pretty easy to prove that the only division
algebras that are C*-algebras over the complex numbers are the
complex numbers themselves.

John Baez

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to
In article <38702BC6...@phys.lsu.edu>,

Vesselin G Gueorguiev <vess...@baton.phys.lsu.edu> wrote:
>Greg Weeks wrote:

>> Whew, hand-waving indeed! Still, I do believe that for every set of
>> equations such as the above, there is a unique C*-algebra that embodies the
>> equations. This vague belief seems to be implicit in the literature that
>> I've seen.

>Well, we can start with a freely generated C*-algebra and use our set
>of equations to define the quotient C*-algebra we need.

You gotta be a bit careful here: there's no such thing as a "freely
generated C*-algebra". In particular, there's no "free C*-algebra
A on one generator x" with the property you might want, namely that
for any C*-algebra A' generated by one element x' there exists a unique
*-homomorphism

f: A -> A'

with

f(x) = x'.

(By the way, a "*-homomorphism" is defined to be a linear oeprator
that preserves the product, unit and *-structure.)

To see this, note that if f is a *-homomorphism then ||f(x)||
is less than or equal to ||x||, so if ||x'|| is greater than
||x||, there can't be a *-homomorphism f with f(x) = x'.

For this reason, it ain't surprising that there's no C*-algebra
containing self-adjoint elements P and Q satisfying the Heisenberg
commutation relations

PQ - QP = iI

where I is the identity. In fact, there's a theorem saying that
there are no *bounded* self-adjoint operators P, Q on a Hilbert space
satisfying the above commutation relation, and there's another theorem
saying that every C*-algebra is *-isomorphic to a C*-algebra of bounded
operators on a Hilbert space.

The usual operators P and Q in quantum mechanics are unbounded.
Unbounded operators can be a real nuisance! This is one reason
why people use the Weyl rather than Heisenberg commutation relations
when doing quantum mechanics in a rigorous way. You *can* define
C*-algebras using generators and relations if the relations imply
that the generators are bounded - for example, if they say that
they're unitary. The Weyl commutation relations have this property,
and this makes the Weyl relations better when you're doing quantum
mechanics C*-algebraically.

Vesselin G Gueorguiev

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to

Charles Francis wrote:

[...]

> I regard Dirac as one of the four greatest theoretical physicists of all time,

My other three are Newton, Einstein, and Feynman; what about yours?

> but that does not mean I think that everything he said was perfect.

I doubt that there will ever be someone who says perfect things
all the way :)

> We don't need infinite dimensions to define momentum space. Let
> {|x>} be the basis of a Hilbert space of dimension n.
>
> For each p in the unit circle, let

I assume this means that p is in [0,2pi]. Is that what you want?
I presume the set {x} is a subset of the integers.

>
> |p> = Sum_x e^-ixp |x>

fine, we can think of this as a point on an n-dim torus :)



> so
> <x|p> = e^-ipx
>

that is how we project T^n onto n different copies of S^1.

> Then (ignoring constants)
>
> |x> = Integral_dp e^ixp |p>
>
> The momentum operator can be written
>
> P = Integral_dp |p> p <p|
>
> = Sum_x Sum_y Integral_dp |y><y|p> p <p|x><x|
>
> = Sum_x Sum_y Integral_dp |y> (-id/dx) e^-ip(y-x) <x|
>
> = Sum_x |x> (-id/dx) <x|
>
> Which seems fine to me. Thus we can define the derivative -id/dx as an
> operator from the momentum space expansion, even for finite dimensions.

There are two things I don't like in this: we have to use properties of
d/dx as defined for a continuous variable x when we assume from the beginning
that {x} is a discrete set. Why would the momentum p be bounded?

[...]

John Baez

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to
In article <84t0bt$9...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>I attended a lecture on operator algebras once
>where it was revealed that things are a bit more complicated.
>Basically, you can't get a C* algebra from a *set* of generators and relations.
>For example, <p,q | p = p*, q = q*, pq - qp = i> doesn't work.
>Instead, you get a C* algebra from a *normed set* of generators and relations.

I'd never quite thought of it this way, but it's a nice viewpoint.
Thank that lecturer for me!

><P,Q; |P| = |Q| = 1 | PP* = P*P = QQ* = Q*Q = 1, PQ - QP = I forget> works.
>(Interpretations of examples: p and q are momentum and position,
>which are unbounded, while P and Q are the exponentiated Weyl operators.)

Actually the Weyl operators, built by exponentiating the momentum and
position operators, satisfy commutation relations of the "Lie group"
rather than "Lie algebra" sort. In particular, if we set

P(s) = exp(ips)
Q(t) = exp(iqt)

where p and q are the momentum and position operators, then the right
commutation relations are:

P(s)Q(t) = exp(ist)Q(t)P(s)

So to define the Weyl C*-algebra, you really want something like this:

<P(s),Q(t); |P(s)| = |Q(t)| = 1 | P(s)P(s)* = P(s)*P(s) = Q(t)Q(t)* =
Q(t)*Q(t) = 1, P(s)Q(t) = exp(ist)Q(t)P(s)>

But note that these relations are redundant! Once we know P(s) and
Q(t) are unitary, we know their norm is one. So in this case we can
get away with an ordinary presentation by generators and relations:

<P(s),Q(t) | P(s)P(s)* = P(s)*P(s) = Q(t)Q(t)* = Q(t)*Q(t) = 1,
P(s)Q(t) = exp(ist)Q(t)P(s)>

This is how I usually define the Weyl C*-algebra.

Here are some other nice examples of C*-algebras that can be defined
using generators and relations. For this week's puzzle, I'll ask our
radio listeners to identify these C*-algebras! You can do this either
by giving their usual names, or, better yet, describing them in some
nice concrete way:

1) The free C*-algebra on a unitary. One generator U together with
the relations U U* = U* U = 1.

2) The free C*-algebra on an isometry. One generator U together with
the relations U* U = 1.

3) The free C*-algebra on two commuting unitaries. Two generators U
and V together with the relations U U* = U* U = 1, V V* = V* V = 1,
and UV = VU.

4) The free C*-algebra on two unitaries with specified commutation
relations. Two generators U and V together with the relations
U U* = U* U = 1, V V* = V* V = 1, and UV = qVU where q is a unit
complex number.

Hints and extra puzzles: C*-algebras 1) and 3) are commutative, so
we can think of them as consisting of continuous functions on certain
topological spaces (in fact, compact Hausdorff spaces). What are
these spaces?

There's an obvious homomorphism from C*-algebra 2) onto C*-algebra 1).
What is the meaning of this homomorphism? How does it tie into the
picture of C*-algebra 1) as consisting of functions on a space?

C*-algebra 4) has an evident relation to the Weyl C*-algebra. What's
going on here? In fact, this algebra was made famous by Alain Connes.
More recently, it has appeared in work on M-theory.
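
Not an answer to the puzzles, but here is a small finite-dimensional toy
related to 4), assuming numpy: when q is a root of unity, the familiar
n x n "shift" and "clock" matrices are unitaries with UV = qVU. For
generic q there is no finite-dimensional realization, so treat this only
as an illustration of the relation, not of the C*-algebra itself.

import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)
q = omega ** (-1)                   # the q that comes out below; q^n = 1

V = np.diag(omega ** np.arange(n))                # "clock": V e_k = omega^k e_k
U = np.roll(np.eye(n, dtype=complex), 1, axis=0)  # "shift": U e_k = e_{k+1 mod n}

assert np.allclose(U @ U.conj().T, np.eye(n))     # U is unitary
assert np.allclose(V @ V.conj().T, np.eye(n))     # V is unitary
assert np.allclose(U @ V, q * V @ U)              # U V = q V U

print("UV = qVU holds with q =", q)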


John Baez

unread,
Jan 7, 2000, 3:00:00 AM1/7/00
to
In article <85651q$t...@charity.ucr.edu>, John Baez <ba...@galaxy.ucr.edu> wrote:

>So to define the Weyl C*-algebra, you really want something like this:
>
><P(s),Q(t); |P(s)| = |Q(t)| = 1 | P(s)P(s)* = P(s)*P(s) = Q(t)Q(t)* =
> Q(t)*Q(t) = 1, P(s)Q(t) = exp(ist)Q(t)P(s)>

Ugh - I left some relations out! P(s) and Q(t) are supposed to be
one-parameter groups, so we also want:

P(s)P(s') = P(s+s')
Q(t)Q(t') = Q(t+t')

Martin Schlottmann

unread,
Jan 8, 2000, 3:00:00 AM1/8/00
to
John Baez wrote:
>
<snip>

>
> For this reason, it ain't surprising that there's no C*-algebra
> containing self-adjoint elements P and Q satisfying the Heisenberg
> commutation relations
>
> PQ - QP = iI
>
> where I is the identity. In fact, there's a theorem saying that
> there are no *bounded* self-adjoint operators P, Q on a Hilbert space
> satisfying the above commutation relation, and there's another theorem
> saying that every C*-algebra is *-isomorphic to a C*-algebra of bounded
> operators on a Hilbert space.
>
<snip>

More generally: For any two elements x,y of a
unital Banach algebra: xy - yx =/= I.
(See W. Rudin, "Functional Analysis", Thm 13.6)

--
Martin Schlottmann


Charles Francis

unread,
Jan 8, 2000, 3:00:00 AM1/8/00
to
In article <38752B98...@phys.lsu.edu>, thus spake Vesselin G
Gueorguiev <vess...@baton.phys.lsu.edu>
>
>
>Charles Francis wrote:
>
>[...]

>
>> I regard Dirac as one of the four greatest theoretical physicists of all
>> time,
>
>my other three physicists are Newton, Einstein & Feynman what about yours?

Maxwell has to be one of the four. Feynman may well be the greatest
teacher of all, but I judge these on the extent and impact of their work
in developing fundamental law.

>> but that does not mean I think that everything he said was perfect.
>

>I doubt that there will ever be someone who says perfect things
>all the way :)
>

>> We don't need infinite dimensions to define momentum space. Let
>> {|x>} be the basis of a Hilbert space of dimension n.
>>
>> For each p in the unit circle, let
>

>I assume this means that p is in [0,2pi]. Is that what you want?
>I presume the set {x} is a subset of the integers.
>

It's more natural to use (-pi,pi], so there are negative values of
momentum. {x} is a bounded interval of the integers, and can be [-n,n].


>>
>> |p> = Sum_x e^-ixp |x>
>

>fine, we can think of this as a point on an n-dim torus :)
>

>> so
>> <x|p> = e^-ipx
>>
>

>that is how we project T^n onto n different copies of S^1.
>

>> Then (ignoring constants)
>>
>> |x> = Integral_dp e^ixp |p>
>>
>> The momentum operator can be written
>>
>> P = Integral_dp |p> p <p|
>>
>> = Sum_x Sum_y Integral_dp |y><y|p> p <p|x><x|
>>
>> = Sum_x Sum_y Integral_dp |y> (-id/dx) e^-ip(y-x) <x|
>>
>> = Sum_x |x> (-id/dx) <x|
>>
>> Which seems fine to me. Thus we can define the derivative -id/dx as an
>> operator from the momentum space expansion, even for finite dimensions.
>

>There are two things I don't like in this: we have to use properties of
>d/dx as defined for a continuous variable x when we assume from the beginning
>that {x} is a discrete set.


d/dx is just an operator - cf. the way it is treated in GR. If you look
at the way I defined it, P is defined in momentum space, so there is no
issue. Then the association with the derivative of a continuous function
appears as a handy coincidence, not a foundation for the model.

>Why would the momentum p be bounded?
>

If you consider real measurement of momentum, it is bounded by the
dimensions of the measuring apparatus and the time taken to measure it.
Very often we think of the universe as composed of quantities which we
try to measure, but I consider that a measured quantity *is* the result
of the measurement to determine it.

John Baez

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
Okay, here's some stuff about the importance of C*-algebras
in physics. I'll carefully avoid any sort of mathematical
details and focus on the basic physical ideas. Everything
will be nonrigorous, handwavy, and vague.

In quantum mechanics we often start by taking classical
observables and writing down some formulas which say that
actually these observables don't commute. The famous
example is of course

pq - qp = i hbar

but there are many others.

When we do this, what we're really doing is defining an
algebra - physicists would usually call it an "algebra
of observables". C*-algebras are a way of making this
precise.

Now, observables aren't much use without states. One way
to get ahold of states is to take your algebra of
observables and represent it as an algebra of operators
on a Hilbert space. Then unit vectors in your Hilbert
space represent states.

However, the same algebra of observables can have different
representations as operators on a Hilbert space.
In the example mentioned above, Heisenberg figured out
how to represent p and q as infinite-dimensional
matrices, while Schrodinger figured out how to represent
them as differential operators. In this particular
case the two representations are "equivalent", so
there's no serious conflict between Heisenberg's
matrix mechanics and Schrodinger's wave mechanics:
they are just two different viewpoints on the same
theory.

Eventually Stone and von Neumann proved a famous theorem
that explains why we don't have to worry about different
representations in this particular case. But later, people
found examples where the same algebra of observables can
have many fundamentally different representations!

The simplest example is the quantum field theory of a
noninteracting massive spin-zero particle. We have
observables analogous to the p's and q's in this theory, so
we get a C*-algebra of observables built from these p's and
q's. This algebra does not depend on the mass of the particle.
However, its representation as operators on Hilbert space
depends in a really important way on the particle's mass.
Different masses correspond to inequivalent representations!

What does this mean in practical terms? Well, if you take
the representation corresponding to a particle of mass m,
and try to write down the operator for the Hamiltonian of
a particle of mass m', you'll get a meaningless divergent
integral unless m' = m.

A similar problem occurs if we try to treat the vacuum state
for the theory with mass m' as a unit vector in the Hilbert
space for the particle of mass m. You can write down a
formula that's supposed to give the answer, but again, it
contains an integral which diverges unless m' = m.

In fact, many of the infinities in quantum field theory can
be understood this way: they come from assuming that all
representations of your algebra of observables are equivalent.
To avoid these infinities, it helps to stop making this
false assumption. It's not a panacea, but it's a necessary
start.

A great example is the case of spontaneous symmetry breaking.
Certain quantum field theories allow many different vacuum
states. Each different vacuum gives a different representation
of the C*-algebra of observables as operators on a Hilbert
space. Crudely speaking, each vacuum state lives in a
different Hilbert space: to get from one to the other
would require the creation or annihilation of infinitely many
particles, so you can't think of them as living in the same
Hilbert space.

Since quantum field theory is closely related to statistical
mechanics, it should come as no surprise that all these phenomena
have analogues in statistical mechanics as well. For this
reason C*-algebras are also useful in statistical mechanics.
Actually there's an interesting feedback loop here: ideas from
statistical mechanics have also had an important effect on the
theory of C*-algebras. The most famous example is the work of
Kubo, Martin and Schwinger on thermal equilibrium, which led
to the Tomita-Takesaki theory. But I said I wouldn't get too
mathematical so I can't talk about this!

I've already hinted that there's a close relation between
"vacuum states" and representations of C*-algebras. In fact
there's a theorem called the GNS construction which makes
this precise. But physically, what's going on here? Well,
it turns out that to define the concept of "vacuum" in a
quantum field theory we need more than the C*-algebra of
observables: we need to know the particular representation.
This becomes most dramatic in the case of quantum field
theory on curved spacetime - a warmup for full-fledged
quantum gravity. It turns out that in this setting, it's
a lot harder to get observers to agree on what counts as
the vacuum than it was in flat spacetime. The most dramatic
example is the Hawking radiation produced by a black hole.
You may have heard pop explanations of this in terms of
virtual particles, but if you dig into the math, you'll
find that it's really a bit more subtle than that. Crudely
speaking, it's caused by the fact that in curved spacetime,
different observers can have different notions of what counts
as the vacuum! And to really understand this, C*-algebras
are very handy.

Well, this post is so lacking in detail that I'm getting sort
of sick of writing it - the really fun part, to me, is how
the mathematics of C*-algebras makes the vague verbiage
above utterly precise and clear! So I'll stop and just
hope that what I'm saying gives a slight taste of what
C*-algebras are good for in physics.

Doug B Sweetser

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
Hello John:

You wrote:

>Of course then you gotta look up "Banach *-algebra" to get the rest
>of the story.

True enough. Fortunately that is in the previous section, 36F

----
F. Banach Star Algebras

An involution in a Banach algebra R is an operator x->x* that satisfies
(1) (x + y)* = x* + y*
(2) (Lx)* = Loverbar x* (isn't ASCII naughty?)
(3) (xy)* = y* x*
(4) (x*)* = x
A Banach algebra with an involution is called a Banach *-algebra.
----

What I will propose in this post is that quaternions form a
non-commutative C*-algebra over the field of quaternions. This is
slightly different from the above definition. Since complex numbers
are a subalgebra of quaternions, if demonstrated, the quaternions
could form a non-commutative C*-algebra over the complex numbers. (I
think your comments applied to commutative C*-algebras, which is
probably what most people work with).

First an involution needs to be defined. It is the standard
conjugate.

q* = (t, X)* = (t, -X)

There is a problem with this though: using just q and q* does not
cover the 4 degrees of freedom quaternions represent. All that is
required is a little twisting to define two other involutions I call
the first and second conjugate.

(i q i)* = ((0, 1, 0, 0)(t, x, y, z)(0, 1, 0, 0))*
= (0, -1, 0, 0)(t, -x, -y, -z)(0, -1, 0, 0)
= (0, -1, 0, 0)(-x, -t, -z, y)
= (-t, x, -y, -z)
is defined to be q*1

(j q j)* = (-t, -x, y, -z) is defined to be q*2

Now we can express any quaternion function over R^4 as a quaternion
function over H^1 using q, q*, q*1, and q*2.

Let's see if an involution has been defined. Just for fun we will use
the first conjugate.

(1) (p + q)*1 = (pt + qt, px + qx, py + qy, pz + qz)*1
= (-pt - qt, px + qx, -py - qy, -pz - qz)
= (-pt, px, -py, -pz) + (-qt, qx, -qy, -qz)
= p*1 + q*1

(2) This is a non-commutative C*-algebra, so this will look like:

(Lx)* = x* Loverbar = x* L*

see (3) for details

(3) (p q)*1 = (pt qt - px.qx - py.qy - pz.qz,
pt qx + px qt + py qz - pz qy,
pt qy + py qt + pz qx - px qz,
pt qz + pz qt + px qy - py qx)*1

=(-pt qt + px.qx + py.qy + pz.qz,
pt qx + px qt + py qz - pz qy,
-pt qy - py qt - pz qx + px qz,
-pt qz - pz qt - px qy + py qx)
switch rows
=(-qt pt + qx.px + qy.py + qz.pz,
qx pt + qt px + qz py - qy pz,
-qy pt - qt py - qx pz + qz px,
-qz pt - qt pz - qy px + qx py)

= - q*1 p*1

or (p q p q)*1 = q*1 p*1 q*1 p*1

Note: (p q)* = q* p*, so the first conjugate twists
quaternions half as much as a quaternion. Cool :-)

(4) (q*1)*1 = (-t, x, -y, -z)*1 = (t, x, y, z) = q
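
Here is a small numerical check of these identities on random
quaternions (the helper functions are mine, introduced only for this
test; mul() is the ordinary Hamilton product on 4-vectors (t, x, y, z)):

import numpy as np

def mul(a, b):
    # Hamilton product of quaternions written as 4-vectors (t, x, y, z)
    t1, x1, y1, z1 = a
    t2, x2, y2, z2 = b
    return np.array([t1*t2 - x1*x2 - y1*y2 - z1*z2,
                     t1*x2 + x1*t2 + y1*z2 - z1*y2,
                     t1*y2 + y1*t2 + z1*x2 - x1*z2,
                     t1*z2 + z1*t2 + x1*y2 - y1*x2])

def conj(a):                  # q*  = (t, -x, -y, -z)
    return np.array([a[0], -a[1], -a[2], -a[3]])

I = np.array([0.0, 1.0, 0.0, 0.0])

def conj1(a):                 # q*1 = (i q i)*
    return conj(mul(mul(I, a), I))

rng = np.random.default_rng(1)
p, q = rng.standard_normal(4), rng.standard_normal(4)

assert np.allclose(conj1(p + q), conj1(p) + conj1(q))           # rule (1)
assert np.allclose(conj(mul(p, q)), mul(conj(q), conj(p)))      # (pq)* = q* p*
assert np.allclose(conj1(mul(p, q)), -mul(conj1(q), conj1(p)))  # rule (3)
assert np.allclose(conj1(conj1(q)), q)                          # rule (4)
print("all four identities check out numerically")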

A definition is a definition. There is a perfect match on 1 and 4.
Rule 2 looks to me like a statement about whether constants commute
with involutions. Since quaternions do not commute, it is not
surprising that this fails to meet this criterion.

This is a definition of something else that is closely related. It is
a bit more complicated than an involution, particularly due to the
properties of (3). The conjugate matches the standard rule perfectly,
but the first and second conjugates point the result in exactly the
opposite way. That sounds to me like what the issue of spin is all
about. I always knew that something with the mathematical properties
of spin had to play half the basic roles for a quaternion physics
model to be valid. Now I suspect that writing non-commutative
automorphic quaternion functions in terms of q, q*, q*1, q*2 and the
corresponding operators may do precisely that.

This insight into quaternion involutions and conjugates is less than
a day old, but I like it :-) With quaternions, I don't think there
is a difference between involutions and conjugates. Rules 2 and 3 get
merged, so there is no difference in the behavior of the field and
algebra of operators on the field.

More importantly, spin starts to seem reasonable. There is the
identity conjugate, which does nothing to signs. (pq)*identity = pq. A
conjugate keeps the scalar positive, flipping the signs of the
3-vector. (pq)* = q* p*. The order is changed for the conjugate.
Two more conjugates are needed to form a basis to span the 4 degrees
of freedom in a quaternion. Flip the scalar and two out of three
elements of the 3-vector. This uses (iqi)*, a conjugate and a
rotation together. It is the rotation, the work of i, that points
things in the opposite direction: (p q)*1 = -q*1 p*1 (the same is true
of the second conjugate, I checked). z and z* are enough to span the
complex numbers. q, q*, q*1, and q*2 are necessary to span the
quaternions. As John has indicated, quaternion functions over R^4 are
not that interesting. I agree. Working with automorphic quaternion
functions requires something with properties analogous to spin. That
makes my brain skip a beat :-)


doug swee...@world.std.com
http://quaternions.com

ps. I bet I can do all this stuff


hrgr...@my-deja.com

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
In article <8538uc$i...@charity.ucr.edu>,
ba...@galaxy.ucr.edu (John Baez) wrote:

> It's pretty hard to prove that the only division algebras over
> the real numbers are the reals, complexes and quaternions.

Long ago I saw a proof using heavy guns of algebraic topology
(cohomology operations) via vector fields on spheres. Did you refer to
that particular proof?

Hans-Richard Grümm
(who slept for some time with Dixmier's book under the pillow, long
ago ... )



[Moderator's note: Unnecessary quoted text deleted. -MM]


(Greg Weeks)

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
John Baez (ba...@galaxy.ucr.edu) wrote:
: The usual operators P and Q in quantum mechanics are unbounded.

: Unbounded operators can be a real nuisance! This is one reason
: why people use the Weyl rather than Heisenberg commutation relations
: when doing quantum mechanics in a rigorous way.

Can you similarly express the equations of motion using bounded operators?


Greg


Toby Bartels

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
John Baez <ba...@galaxy.ucr.edu> wrote in part:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>I attended a lecture on operator algebras once
>>where it was revealed that things are a bit more complicated.

>>You can't get a C* algebra from a *set* of generators and relations;


>>you get a C* algebra from a *normed set* of generators and relations.

>I'd never quite thought of it this way, but it's a nice viewpoint.
>Thank that lecturer for me!

I don't remember who it was!
One of the operator algebra people at the University of Nebraska-Lincoln --
unless it was a guest, and then who knows?

><P(s),Q(t); |P(s)| = |Q(t)| = 1 | P(s)P(s)* = P(s)*P(s) = Q(t)Q(t)* =
> Q(t)*Q(t) = 1, P(s)Q(t) = exp(ist)Q(t)P(s)>

Thanks for correcting the presentation of the Weyl algebra
(as further amended by you in your followup).

>But note that these relations are redundant! Once we know P(s) and
>Q(t) are unitary, we know their norm is one. So in this case we can
>get away with an ordinary presentation by generators and relations:

><P(s),Q(t) | P(s)P(s)* = P(s)*P(s) = Q(t)Q(t)* = Q(t)*Q(t) = 1,
> P(s)Q(t) = exp(ist)Q(t)P(s)>

In general, you can do this if your generators are isometries.
This is because isometries are forced to have norm 1,
so you can assume that the norm of the generators is listed as 1.


-- Toby
to...@ugcs.caltech.edu


Charles Francis

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
I do hope you do not see this as part of an emotional response. I do
think it is vital that we study fundamentals very carefully, and subject
them to every challenge. At the moment C*-algebras are, it seems to me,
where it's at.


In article <859ja5$4...@charity.ucr.edu>, thus spake John Baez
<ba...@galaxy.ucr.edu>


>Okay, here's some stuff about the importance of C*-algebras
>in physics. I'll carefully avoid any sort of mathematical
>details and focus on the basic physical ideas. Everything
>will be nonrigorous, handwavy, and vague.
>

Good. If what you say is valid it should be possible to see how to put
it back into rigorous expression in one's head. I always find that
rigorous expression written down stultifies the mind, whether I am
writing it or reading it.

>Eventually Stone and von Neumann proved a famous theorem
>that explains why we don't have to worry about different
>representations in this particular case. But later, people
>found examples where the same algebra of observables can
>have many fundamentally different representations!
>

I may need more clarification on this one.

>The simplest example is the quantum field theory of a
>noninteracting massive spin-zero particle.

Without too much hand waving it should be possible to see that the
notion is meaningless. Even setting aside my reservations about the K-G
equation, how could something non-interacting have an observable state?
Nonetheless there may be something here about the states of particles
such as the W & the Z, which can exist only if they are not observable,
because they don't have a satisfactory representation in Hilbert space.
You could get me excited about C*-algebras yet!

<snip to save the moderator>

>What does this mean in practical terms? Well, if you take
>the representation corresponding to a particle of mass m,
>and try to write down the operator for the Hamiltonian of
>a particle of mass m', you'll get a meaningless divergent
>integral unless m' = m.

I have even more reservations about basing a model on a Hamiltonian.
Hamiltonians can be found by mathematical transformations on other
theories, but I don't think they can be regarded as fundamental because,
in themselves, they are descriptive of nothing - except for the
interaction Hamiltonian of qed which describes the emission or
absorption of a photon by an electron in terms of creation and
annihilation operators on multiparticle space. But if you interpret it
like that it is doubtful that it should be called a Hamiltonian at all.


>
>A similar problem occurs if we try to treat the vacuum state
>for the theory with mass m' as a unit vector in the Hilbert
>space for the particle of mass m. You can write down a
>formula that's supposed to give the answer, but again, it
>contains an integral which diverges unless m' = m.

I cannot grasp this. As far as I can see m is a fundamental property of
a particle. It seems as though you are describing one particle with a
Hilbert space designed for measured states of another. It's just not a
legitimate thing to do. You have first to form the direct product of
both spaces, and keep them distinct.


>
>In fact, many of the infinities in quantum field theory can
>be understood this way: they come from assuming that all
>representations of your algebra of observables are equivalent.
>To avoid these infinities, it helps to stop making this
>false assumption. It's not a panacea, but it's a necessary
>start.

It is certainly necessary to avoid some false assumption to avoid the
infinities. That must always have been clear to anyone competent in
mathematics. The question is which assumption is false? As you know I
claim to have solved this problem from a very different perspective.

>
>A great example is the case of spontaneous symmetry breaking.

To avoid the necessity of a rant about SSB, see my remarks on
Hamiltonians.

>Certain quantum field theories allow many different vacuum
>states.

Hand wavy, or fundamentally meaningless? How can the state of nothing be
anything other than the state of nothing? In my view any theory that
suggests otherwise has entered the realm of gibberish.

>
>Since quantum field theory is closely related to statistical
>mechanics,

...,<snip, to save the moderator>


> But I said I wouldn't get too
>mathematical so I can't talk about this!

and I don't know enough to respond, but I do believe that the Hilbert
space formalism is applicable to any measurement theory, not just
quantum theory, and can lead to improved classical error analysis, and
can be used for statistical mechanics. The Schrodinger equation may be
inappropriate, but Hilbert space should be fine, and take its algebra of
operators to statistical mechanics too.


>
>I've already hinted that there's a close relation between
>"vacuum states" and representations of C*-algebras. In fact
>there's a theorem called the GNS construction which makes
>this precise. But physically, what's going on here? Well,
>it turns out that to define the concept of "vacuum" in a
>quantum field theory we need more than the C*-algebra of
>observables: we need to know the particular representation.
>This becomes most dramatic in the case of quantum field
>theory on curved spacetime - a warmup for full-fledged
>quantum gravity. It turns out that in this setting, it's
>a lot harder to get observers to agree on what counts as
>the vacuum than it was in flat spacetime.

It seems that in the present setting (i.e. the newsgroup) it is going to
be pretty difficult to get us to agree on what counts as the vacuum in
Hilbert space.

>The most dramatic
>example is the Hawking radiation produced by a black hole.
>You may have heard pop explanations of this in terms of
>virtual particles, but if you dig into the math, you'll
>find that it's really a bit more subtle than that. Crudely
>speaking, it's caused by the fact that in curved spacetime,
>different observers can have different notions of what counts
>as the vacuum! And to really understand this, C*-algebras
>are very handy.

As I say, I seem to have a very different way of understanding the
vacuum. I also seem to have a very different way of understanding
"virtual" particles. I maintain that they are just real particles which
haven't been understood properly by physicists who call them virtual.
And I do think I am a good enough mathematician to have made this claim
good in my papers. Okay, I would have to rewrite Hawking's work to make
proper sense of it, and I haven't done that. But it really is just an
exercise, not a problem in principle.


>
>Well, this post is so lacking in detail that I'm getting sort
>of sick of writing it

Are you sure that that is not just because you have your own nagging
worries?

> - the really fun part, to me, is how
>the mathematics of C*-algebras makes the vague verbiage
>above utterly precise and clear!

Whereas I find that's the boring bit!

>So I'll stop and just
>hope that what I'm saying gives a slight taste of what
>C*-algebras are good for in physics.
>

And I'll pick up and say what I think they may be good for. I already
maintain that qed can be treated perfectly well by operators on Hilbert
space. But in qed we are studying the effects of particle interactions
on states which are observable, at least in principle. If we can hold on
to that as the basis for understanding the next layer of fundamental
physical theory, then I can quite happily dream of studying the effects
of particle interactions in states which are not observable in
principle, and I can use "not observable in principle" as a perfectly
valid reason for throwing away Hilbert space, and retaining the bit that
(it is reasonable to anticipate) still works, namely the interaction
operators which presumably can be considered generators of a C* algebra.

Once you do that, then you provide a perfectly acceptable structure for
the weak interactions, without needing to go through all the Higgs stuff
& SSB stuff, and also you have a structure which may well work, not just
for Hawking radiation, but actually for the mechanisms inside a black
hole and at the big bang. So I can get quite excited by C* algebra after
all!

John Baez

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
In article <855i8f$93c$1...@nnrp1.deja.com>, <hrgr...@my-deja.com> wrote:

>In article <8538uc$i...@charity.ucr.edu>,
> ba...@galaxy.ucr.edu (John Baez) wrote:

>> It's pretty hard to prove that the only division algebras over
>> the real numbers are the reals, complexes and quaternions.

>Long ago I saw a proof using heavy guns of algebraic topology
>(cohomology operations) via vector fields on spheres. Did you refer to
>that particular proof ?

There are various slightly different proofs, but as far as I know,
they all involve understanding how many linearly independent
vector fields you can put on a sphere, which in turn requires some
heavy-duty algebraic topology. Unfortunately I have not gone through
any of the proofs in detail - maybe I'll do it when I retire! It
seems like a nice example of how we use algebra to solve problems
in topology to solve problems in algebra. And I would like to
understand better how the octonions fit into this game: the only
n-spheres that admit n linearly independent vector fields are those for
n = 0,1,3, and 7, corresponding to the reals, complexes, quaternions
and octonions. The importance of this for string theory is fairly
well-known, but I think there are still mysteries lurking here.

Gerard Westendorp

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
John Baez wrote:

> The simplest example is the quantum field theory of a
> noninteracting massive spin-zero particle. We have
> observables analogous to the p's and q's in this theory, so
> we get a C*-algebra of observables built from these p's and
> q's. This algebra does not depend on the mass of the particle.
> However, its representation as operators on Hilbert space
> depends in a really important way on the particle's mass.
> Different masses correspond to inequivalent representations!

This sounds like the Klein Gordon equation.
I thought I was beginning to understand this equation. But I
don't see any problem with representation. The momentum
representation can be expressed in the position representation in
terms of sine functions. The dispersion
relation of the sine functions depends on the mass. But what
is the problem?

Gerard

John Baez

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
In article <387917F3...@xs4all.nl>,
Gerard Westendorp <wes...@xs4all.nl> wrote:

>John Baez wrote:

>> The simplest example is the quantum field theory of a
>> noninteracting massive spin-zero particle. We have
>> observables analogous to the p's and q's in this theory, so
>> we get a C*-algebra of observables built from these p's and
>> q's. This algebra does not depend on the mass of the particle.
>> However, its representation as operators on Hilbert space
>> depends in a really important way on the particle's mass.
>> Different masses correspond to inequivalent representations!

>This sounds like the Klein Gordon equation.

Yes. More precisely, its "second-quantized" version: I'm
talking about quantum field theory here, not single-particle
relativistic quantum mechanics.

>I thought I was beginning to understand this equation.

Good! You probably are!

>But what is the problem?

I'd rather not call it a "problem" - I'd rather just call it a "fact".

I was trying to express myself without jargon, but that meant
I had to be rather vague. Here's what I was really getting at:
for different values of the mass, the 2nd-quantized Klein-Gordon
equation gives inequivalent unitary representations of the canonical
commutation relations. This came as a shock to physicists weaned
on the Stone-von Neumann theorem, which says that for finitely many
degrees of freedom all strongly continuous unitary representations
of the Weyl C*-algebra are equivalent. For infinitely many degrees
of freedom this theorem no longer holds, and the above situation is
one of the simplest where you can see exactly how it fails. When
people understood this issue, they realized where a bunch of infinities
in quantum field theory were coming from, and got better at avoiding
them.

(If the jargon I'm using above makes no sense, the cure is to read
some books on algebraic quantum field theory.)


John Baez

unread,
Jan 9, 2000, 3:00:00 AM1/9/00
to
In article <855j7e$je1$1...@news.dtc.hp.com>,
(Greg Weeks) <we...@golden.dtc.hp.com> wrote:

>John Baez (ba...@galaxy.ucr.edu) wrote:

If you have a self-adjoint Hamiltonian H and you don't like the fact
that it's unbounded, you are always free to work with the one-parameter
group exp(itH) instead: these operators are unitary hence bounded.
You can recover H from the one-parameter group exp(itH) using Stone's
theorem, so there's really no problem switching between viewpoints -
we can use whatever happens to be more convenient at the moment.

In the Heisenberg picture we think of this one-parameter group as
acting on *operators* rather than states:

A -> exp(itH) A exp(-itH)

If the operators A that you're interested in form a C*-algebra,
you want to make sure that whenever A lies in this C*-algebra, so
does A(t). Then time evolution is a "one-parameter group of
*-automorphisms" of your C*-algebra. The "one-parameter group"
part means that if we write

F(t)(A) = exp(itH) A exp(-itH)

then the maps F(t) satisfy

F(0) = 1
F(s)F(t) = F(s+t)

The "*-automorphism" part means that the maps F(t) preserve everything
in sight: addition, scalar multiplication, the product, and the *-structure.
(As a result, they also preserve the norm.)
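
As a quick sanity check of all this, here is a minimal numerical sketch
(assuming numpy and scipy; the 2x2 Hermitian matrix H and the matrices
A, B are made-up stand-ins, nothing physical):

    # Check that F(t)(A) = exp(itH) A exp(-itH) is a one-parameter group
    # of *-automorphisms, on a toy 2x2 example.
    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    H = (M + M.conj().T) / 2          # a self-adjoint "Hamiltonian"
    A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

    def F(t, X):
        U = expm(1j * t * H)          # unitary, hence bounded
        return U @ X @ U.conj().T     # = exp(itH) X exp(-itH)

    s, t = 0.7, -1.3
    assert np.allclose(F(0, A), A)                          # F(0) = 1
    assert np.allclose(F(s, F(t, A)), F(s + t, A))          # F(s)F(t) = F(s+t)
    assert np.allclose(F(t, A @ B), F(t, A) @ F(t, B))      # preserves products
    assert np.allclose(F(t, A.conj().T), F(t, A).conj().T)  # preserves the *
    assert np.isclose(np.linalg.norm(F(t, A), 2),
                      np.linalg.norm(A, 2))                 # preserves the norm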

Abstracting slightly, suppose we have a C*-algebra that we're not yet
thinking of as operators on some Hilbert space. Then we can still
describe time evolution by a one-parameter group of *-automorphisms
of this C*-algebra. We say such a one-parameter group F(t) is "inner" if
there's a one-parameter group of unitary elements U(t) in our C*-algebra
such that

F(t)(A) = U(t) A U(-t)

In this case you can think of U(t) as a perfectly fine substitute for
exp(itH), even if there's no "H" in your C*-algebra. On the other
hand, often the one-parameter group F(t) will not be inner. This is
still okay - it happens in quantum field theory a lot. Of course,
even if F(t) isn't inner, there are tricks to *make* it inner by
fattening up our C*-algebra a bit - in other words, by carefully
making up unitaries U(t) with the desired properties and throwing
them into our C*-algebra.

In general, if you want to get good at this stuff, it's good to learn
lots of tricks that let you shift back and forth between viewpoints:

1) the Hilbert space viewpoint,
2) the C*-algebra viewpoint, and
3) the von Neumann algebra viewpoint

And also, when you're working in the Hilbert space approach:

A) the "unbounded self-adjoint operators are yucky" viewpoint, and
B) the "unbounded self-adjoint operators are fine" viewpoint

I guess your question concerned jumping back and forth between
A) and B) but since this thread is entitled "C*-algebras", I wound
up also talking about jumping back and forth between 1) and 2).

Here are some of the basic tricks for jumping back and forth:

i) The GNS construction for going from a C*-algebra w. state to a
Hilbert space.
ii) The SNAG theorem, which is like the GNS construction with time
evolution thrown in (to get started, you need a C*-algebra with a
one-parameter group of automorphisms and a state that's invariant
under this group).
iii) The universal enveloping von Neumann algebra of a C*-algebra.
iv) The universal representation of a C*-algebra or von Neumann algebra.
v) Stone's theorem for going between strongly continuous unitary groups
and self-adjoint operators

The books by Bratteli and Robinson are pretty good for describing
these tricks and actually applying them to statistical mechanics.
There are also of course lots of nice tomes on C*-algebras per se,
like the ones by Kadison and Ringrose. Now I'm sort of feeling
nostalgic for the days when I thought about this stuff more!

John Baez

unread,
Jan 10, 2000, 3:00:00 AM1/10/00
to
In article <85c16k$8...@charity.ucr.edu>, John Baez <ba...@galaxy.ucr.edu> wrote:

>In the Heisenberg picture we think of this one-parameter group as
>acting on *operators* rather than states:
>
>A -> exp(itH) A exp(-itH)
>
>If the operators A that you're interested in form a C*-algebra,
>you want to make sure that whenever A lies in this C*-algebra, so
>does A(t). Then time evolution is a "one-parameter group of
>*-automorphisms" of your C*-algebra.

Umm, I probably should have said somewhere that

A(t) = exp(itH) A exp(-itH)

John Baez

unread,
Jan 10, 2000, 3:00:00 AM1/10/00
to
In article <8538uc$i...@charity.ucr.edu>, John Baez <ba...@galaxy.ucr.edu> wrote:

>The quaternions, so dear to your heart, are a C*-algebra over the real
>numbers, but not over the complex numbers.

Doug B Sweetser <uunet!world!swee...@ncar.UCAR.EDU> wrote:

>the quaternions could form a non-commutative C*-algebra over the
>complex numbers.

No. The quaternions don't even form an *algebra* over the complex
numbers, much less a C*-algebra over the complex numbers. Remember,
if A is an algebra over the commutative ring R, we have

a(ra') = r(aa')

for all a, a' in A and all r in R. There are different ways to make
the quaternions into a vector space over the complex numbers, but none
satisfy the above rule. And there are darn good reasons for requiring
this rule.
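
Just to see the rule bite, here's a tiny sketch in plain Python. It only
tests one made-up choice of complex structure - complex scalars acting by
left multiplication by the quaternion i - but that choice already breaks
the rule (take a = j, r = i, a' = 1):

    # a(r a') versus r(a a') with a = j, r = i, a' = 1, using the
    # quaternion product on (t, x, y, z) components.
    def qmul(p, q):
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    i, j, one = (0, 1, 0, 0), (0, 0, 1, 0), (1, 0, 0, 0)
    lhs = qmul(j, qmul(i, one))   # a(r a') = j i = -k
    rhs = qmul(i, qmul(j, one))   # r(a a') = i j = +k
    print(lhs, rhs)               # (0, 0, 0, -1) versus (0, 0, 0, 1)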

(By the way, there are also darn good reasons why people don't talk
about algebras over noncommutative rings, so you'll find that defining
an "algebra over the quaternions" is a tricky business. There may be
some interesting way to do it, but I don't know what it is.)


John Baez

unread,
Jan 10, 2000, 3:00:00 AM1/10/00
to
In article <85ae3e$gb3$1...@nnrp1.deja.com>,
bj flanagan <Word...@Rocketmail.com> wrote:

>John Baez wrote:

>> In quantum mechanics we often start by taking classical
>> observables and writing down some formulas which say that
>> actually these observables don't commute.

>bj: What counts as an 'observable'?

On the philosophical side I have nothing interesting to
say about this question - at least not in this forum.

On the mathematical side any real-valued or complex-valued
quantity will do.

On the practical side it's worth noting that while there
is in principle a vast choice in which quantities we use
as our observables in the above game, it's almost always
best to choose observables that satisfy simple Poisson
bracket relations - because these are the easiest to "turn
into commutators".

And the nicest, simplest bracket relations of all are
those saying the quantities in question form a finite-
dimensional Lie algebra. When you have this going for you,
you can often apply the massive sledgehammer of Lie theory
and crush the problem in one blow. That's why people talk
so much about "canonical commutation relations" (corresponding
to the Heisenberg Lie algebra) and "particles of spin j and
mass m" (corresponding to the Poincare Lie algebra). But
of course there's more to this than just convenience: what
we're seeing is the all-important role of *symmetry* in
understanding the relation between quantum and classical.
Without symmetry to help us, quantization is fraught with
peril.
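
The simplest example of such a finite-dimensional Lie algebra of
observables is angular momentum: here's a little numerical check (a
sketch, assuming numpy) that the spin-1/2 operators close under
commutators, [Jx, Jy] = i Jz and cyclic permutations:

    # Spin-1/2: J_k = sigma_k / 2 form a 3-dimensional Lie algebra (su(2)).
    import numpy as np
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    Jx, Jy, Jz = sx / 2, sy / 2, sz / 2
    comm = lambda a, b: a @ b - b @ a
    assert np.allclose(comm(Jx, Jy), 1j * Jz)
    assert np.allclose(comm(Jy, Jz), 1j * Jx)
    assert np.allclose(comm(Jz, Jx), 1j * Jy)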

Anyway, I digress, but I wanted to bend your question in
a direction I find more interesting: not "what counts as
an observable?" but "what are useful choices of observables?"
The former question may seem more important, but what
physicists *really* want to know about is the latter, so
they love to trade practical advice and tips about how to pick
good observables (my remarks above being just the tip of the
iceberg).


Paul Colby

unread,
Jan 10, 2000, 3:00:00 AM1/10/00
to

John Baez <ba...@galaxy.ucr.edu> wrote in message
news:859ja5$4...@charity.ucr.edu...

> Okay, here's some stuff about the importance of C*-algebras
> in physics. I'll carefully avoid any sort of mathematical
> details and focus on the basic physical ideas. Everything
> will be nonrigorous, handwavy, and vague.
>

<cut>

Thanks, this is very helpful. I recently purchased "K-theory and
C*-Algebras" by Wegge-Olsen with the hope of getting an introduction to
C*-algebras. While the tone of this book is good it starts assuming too
much is known about C*-algebras. Could you recommend a simpler intro to the
C*-Algebras? I also probably don't know enough about Banach Algebras
either.

Thanks

--
Regards
Paul Colby
Paul....@worldnet.att.net


Toby Bartels

unread,
Jan 10, 2000, 3:00:00 AM1/10/00
to
John Baez <ba...@galaxy.ucr.edu> wrote in small part:

>1) The free C*-algebra on a unitary. One generator U together with
>the relations U U* = U* U = 1.

OK, this one is commutative, so it's
continuous complex functions on some compact Hausdorff space X.
We should be able to get all polynomials in U.
This is infinitely many degrees of freedom,
so we know X is not a finite space.
But a C* algebra is also required to be complete metrically,
so we need not just polynomials but all nice functions of 1 variable.
Hence, X should be a continuum, like R.
But R is not compact! Try T (the circle) instead.
Claim: (1) is the algebra C(T) of continuous complex functions on T.
I must identify U; U must be a nonconstant function to be nontrivial.
Interpret T as the unit circle in the complex plane,
and let U be the inclusion function into C.
Then U* = U^{-1} and |U| = 1; good.
I won't work out the proof until John fails to say I'm wrong.
(We can also say (1) is the space of
continuous complex valued functions on R of period 2 pi;
then U is the cos + i sin function.)
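
A quick numerical sanity check of that identification (a sketch, assuming
numpy, with the circle sampled at finitely many points):

    # U = the inclusion of the unit circle into C, as an element of C(T).
    import numpy as np
    theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
    U = np.exp(1j * theta)                   # the cos + i sin function
    assert np.allclose(np.conj(U) * U, 1)    # U* U = 1 (pointwise product)
    assert np.allclose(U * np.conj(U), 1)    # U U* = 1
    assert np.isclose(np.abs(U).max(), 1)    # sup norm ||U|| = 1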


-- Toby
to...@ugcs.caltech.edu


Joseph Godfrey

unread,
Jan 11, 2000, 3:00:00 AM1/11/00
to
The original proof of this result, I believe, is due to Frobenius in 1877
(Frobenius' Theorem). There is a proof in Herstein's book, Topics in Algebra
(1975) that avoids the "heavy guns." Frobenius' proof still relies on ring
theory which, of course, has deep connections with all the areas
mentioned: cohomology, vector fields on spheres, etc. But the actual proof
is fairly "combinatoric" in character: one basically assumes a division ring
over a field and deduces the possible algebras for the fields.

I am not aware of any similar treatment for the octonions. I believe there
is a relationship between E(8) and octonions -- I don't remember exactly. I
seem to remember it involves introducing octonion coordinates on a
projective space of some dimension to get the relevant vector space. And
projective spaces, of course, have a "representation" as spheres in a space
of higher dimension. I believe all the exceptional groups have this
character, relying variously on quaternion and octonion coordinates
introduced on projective spaces of various dimensions.

I share the conviction that there are still mysteries lurking here.

Regards,

Joe Godfrey

Toby Bartels

unread,
Jan 11, 2000, 3:00:00 AM1/11/00
to
Doug B Sweetser <uunet!world!swee...@ncar.UCAR.EDU> wrote almost at first:

>What I will propose in this post is that quaternions form a
>non-commutative C*-algebra over the field of quaternions. This is
>slightly different from the above definition.

You never actually defined a C* algebra over the quaternions,
but I think your post gives good reason to suspect
that such a definition is possible
and that the quaternions themselves would satisfy it.

>Since complex numbers
>are a subalgebra of quaternions, if demonstrated, the quaternions
>could form a non-commutative C*-algebra over the complex numbers.

That would be a *possibility*, but ...

>I think your comments applied to commutative C*-algebras, which is
>probably what most people work with.

Sadly, no.
After all, we're discussing C* algebras' use in quantum physics.
Commutative C* algebras only give you classical physics.


-- Toby
to...@ugcs.caltech.edu


(Greg Weeks)

unread,
Jan 11, 2000, 3:00:00 AM1/11/00
to
John Baez (ba...@galaxy.ucr.edu) wrote:
: >Can you similarly express the equations of motion using bounded operators?
...
: A -> exp(itH) A exp(-itH)

Oh, right. I guess I shied away from this because, in free field theory at
least, viewing the "equations of motion" to be the Klein-Gordon equation
seems less singular than the formal expression for the Hamiltonian. (No
multiplication of unsmeared operators at points.)


: In this case you can think of U(t) as a perfectly fine substitute for
: exp(itH), even if there's no "H" in your C*-algebra. On the other
: hand, often the one-parameter group F(t) will not be inner. This is
: still okay - it happens in quantum field theory a lot.

I'm confused. In quantum field theory, I'm used to having faithful
representations of the algebra in which time evolution is via exp(-itH).
Doesn't this imply inner time evolution in the algebra?


Greg


ar...@csstupc28.cs.nyu.edu

unread,
Jan 11, 2000, 3:00:00 AM1/11/00
to
Hi,
I am reading a book by Gerard Murphy: "C* Algebras and Operator
Theory" which seems a lot more accessible to me. (I don't have too
formal a background in analysis beyond bits and pieces of hilbert
spaces and banach spaces.) I think it does cover K theory and Bott
periodicity towards the end of the book.

Hope this helps,
Archi

John Baez

unread,
Jan 11, 2000, 3:00:00 AM1/11/00
to
In article <85clla$n...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>John Baez <ba...@galaxy.ucr.edu> wrote in small part:

>>1) The free C*-algebra on a unitary. One generator U together with
>>the relations U U* = U* U = 1.

>OK, this one is commutative, so it's
>continuous complex functions on some compact Hausdorff space X.

Right - the Gelfand-Naimark theorem kicks in and tells us this.

>We should be able to get all polynomials in U.
>This is infinitely many degrees of freedom,
>so we know X is not a finite space.
>But a C* algebra is also required to be complete metrically,
>so we need not just polynomials but all nice functions of 1 variable.
>Hence, X should be a continuum, like R.
>But R is not compact! Try T (the circle) instead.

Good!

>Claim: (1) is the algebra C(T) of continuous complex functions on T.
>I must identify U; U must be a nonconstant function to be nontrivial.
>Interpret T as the unit circle in the complex plane,
>and let U be the inclusion function into C.
>Then U* = U^{-1} and |U| = 1; good.
>I won't work out the proof until John fails to say I'm wrong.

You won't work it out until I fail to say you're wrong???
First it's your wonderful phrase "fall out of disfavor", and
now this. Double negatives are okay, but triple negatives
turn my brain to mush. I can't even tell if you meant what
you said or not!

Anyway, I won't refrain from failing to say you're wrong,
because you're RIGHT. And I'll even sketch the proof.

Suppose u is any unitary in any C*-algebra A. Its spectrum
lies on the unit circle, so given any continuous function
f: T -> C we get an element f(u) of A by the functional
calculus. This gives us a *-homomorphism from C(T) to A
sending f to f(u). And this *-homomorphism maps your U
to the element u. And it's the only *-homomorphism with
this property. Voila! We conclude C(T) is the free
C*-algebra on the unitary U.

(Well, we need to check some details, but you get the idea.)

Moral: since the spectrum of every unitary operator lives
in the unit circle, the free C*-algebra on a unitary is
just the continuous functions on the unit circle.

(Similarly the "free C*-algebra on a self-adjoint" is the
continuous functions on the real line, but this is slightly
harder to make precise, for reasons we've already discussed
here.)
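
To see the functional-calculus step in the friendliest possible setting,
here is a finite-dimensional sketch (numpy assumed; u is just a made-up
3x3 unitary, so this illustrates the idea rather than proving anything):

    # For a unitary u = V diag(w) V*, define f(u) by applying f to the
    # eigenvalues w.  The map f |-> f(u) is then a *-homomorphism sending
    # the inclusion function z |-> z to u itself.
    import numpy as np
    rng = np.random.default_rng(1)
    M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    V, _ = np.linalg.qr(M)                        # a made-up unitary of eigenvectors
    w = np.exp(1j * np.array([0.3, 1.1, -2.0]))   # eigenvalues on the unit circle
    u = V @ np.diag(w) @ V.conj().T               # a concrete 3x3 unitary

    def fc(f):                                    # the functional calculus f |-> f(u)
        return V @ np.diag(f(w)) @ V.conj().T

    f = lambda z: z**2 + 3 * np.conj(z)           # continuous functions on T
    g = lambda z: np.exp(z) - 1 / z               # (1/z is fine: |z| = 1 there)
    assert np.allclose(fc(lambda z: f(z) * g(z)), fc(f) @ fc(g))     # multiplicative
    assert np.allclose(fc(lambda z: np.conj(f(z))), fc(f).conj().T)  # *-preserving
    assert np.allclose(fc(lambda z: z), u)        # sends U to u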

So now we come to the much trickier and more interesting
problem: what's the free C*-algebra on an isometry? I'll
give you a mystical-sounding hint: it's the "noncommutative
unit disc".

John Baez

unread,
Jan 11, 2000, 3:00:00 AM1/11/00
to
In article <85agqg$o7g$1...@bgtnsc03.worldnet.att.net>,
Paul Colby <Paul....@worldnet.att.net> wrote:

>Thanks, this is very helpful.

Great!

>I recently purchased "K-theory and
>C*-Algebras" by Wegge-Olsen with the hope of getting an introduction to
>C*-algebras. While the tone of this book is good it starts assuming too
>much is known about C*-algebras. Could you recommend a simpler intro to the
>C*-Algebras?

There are lots of books on C*-algebras. I think this is a friendly
sort of introduction:

An invitation to C*-algebras / William Arveson. New York :
Springer-Verlag, 1976.

But personally I learned the basics from this one, which is a bit
more systematic and thorough:

Theory of operator algebras I / Masamichi Takesaki. New York :
Springer-Verlag, c1979.

If you want to really become an *expert*, try this - it also has
tons of exercises:

Fundamentals of the theory of operator algebras / Richard V. Kadison,
John R. Ringrose. New York : Academic Press, 1983-1992 (four volumes).

Finally, if you want to see how C*-algebras show up in physics,
this one is really nice:

Algebraic methods in statistical mechanics and quantum field theory /
Gerard G. Emch. New York : Wiley-Interscience, 1972.

>I also probably don't know enough about Banach Algebras either.

I don't either.

The theory of Banach algebras has a surprisingly different flavor
from the theory of C*-algebras. They're similar at first, but
the Gelfand-Naimark-Segal theorem says that every C*-algebra is
isomorphic to an algebra of operators on some Hilbert space, and
this just ain't true for Banach algebras, so they go off in different
directions after that. C*-algebras have "quantum mechanics" written
all over them; Banach algebras don't.

Also, the Gelfand-Naimark theorem says that commutative C*-algebras
are just algebras of continuous functions on compact Hausdorff spaces -
nice and easy - while commutative Banach algebras can have a lot more
personality. A lot of interesting examples of commutative Banach
algebras are algebras of analytic functions, so the theory of commutative
Banach algebras is sort of an abstraction of certain hunks of complex
analysis. There's this "Shilov boundary" idea that generalizes the
maximum modulus principle in complex analysis....

When I was a grad student I worked with the dude who invented
C*-algebras, so it's perhaps fitting that when I was a postdoc
I shared an office with a retired professor who was an expert on
Banach algebras. I have a copy of his book:

General theory of Banach algebras / Charles Earl Rickart.
Princeton, N.J. : Van Nostrand, 1960.

I think it's sort of outdated - it covers all sorts of stuff that
most people don't talk about much these days, and probably leaves
out stuff people *do* talk about. I bet someone could suggest
newer ones. One nice thing about it, though, is that it covers
the theory of H*-algebras (roughly, *-algebras that are Hilbert
spaces). Believe it or not, I've needed this now and then in my
work.

Charles Francis

unread,
Jan 12, 2000, 3:00:00 AM1/12/00
to
In article <85car6$j...@gap.cco.caltech.edu>, thus spake Toby Bartels
<to...@ugcs.caltech.edu>
>Charles Francis <cha...@clef.demon.co.uk> wrote at last:

>
>>If you consider real measurement of momentum, it is bounded by the
>>dimensions of the measuring apparatus and the time taken to measure it.
>
>I know this is off topic for this branch of the thread,
>but the Subject header still says "C*-algebras", so ...:
>
>This is why I don't consider it a problem, philosophically speaking,
>that the unbounded momentum operator doesn't belong to any C* algebra.
>The observable corresponding to any *actual* momentum measuring experiment
>is bounded.

Would not the cut-offs used in renormalisation be so much more
approachable if they were treated from this angle, and defined to be
finite on physical grounds, in such a way that invariance is restored in
the limit? If the theory is formulated in a finite manner, it cannot
diverge. And by insisting on an interaction density[1] which does not
allow two interactions at the same point in time for any particle, there
is no infinite renormalisation problem, provided that limits are taken
in the correct order.


[1] The interaction density is formally identical to the interaction
term in the lagrangian, but dealt with in a finite space of particles it
can be interpreted as describing the potential for actual interactions
between real particles, not just as a quantised classical potential.

Walter Kunhardt

unread,
Jan 12, 2000, 3:00:00 AM1/12/00
to
On 11 Jan 2000, (Greg Weeks) wrote:

> So, as I've said before, I have to disagree with Haag when he says that all
> the physics can be found in the net of local C*-algebras (ie, the algebras
> of observables restricted to various finite regions of space-time). These
> algebras suffice for collision theory, but there is more to life than
> collisions.

Is there, really?
At least as far as high energy physics is concerned, are not all
experiments which one wants to describe by a (quantum field) theory
collision experiments?

By the way, algebraic quantum field theory can describe more than
collision theory:
- spin & statistics
- CPT symmetry
- the existence of a global gauge group can be proved
- small-scale asymptotic particles (e.g. quarks) can be derived
- AdS-CFT correspondence
- ...

For a nice summary of the sort of questions raised and for
the state of affairs in algebraic quantum field theory, have a look at

- "Current trends in axiomatic quantum field theory"
Detlev Buchholz
http://arXiv.org/abs/hep-th/9811233

- "The Quest for Understanding in Relativistic Quantum Physics"
Detlev Buchholz, Rudolf Haag
http://arXiv.org/abs/hep-th/9910243

For the AdS-CFT stuff, see

- "A Proof of the AdS-CFT Correspondence"
K.-H. Rehren
http://arXiv.org/abs/hep-th/9910074

- "Algebraic Holography"
K.-H. Rehren
http://arXiv.org/abs/hep-th/9905179


The point of view of local quantum physics (which Greg criticises in his
post) is explained in the recent paper by Haag himself,

- Questions in quantum physics: a personal view
Rudolf Haag
http://arXiv.org/abs/hep-th/0001006

I think the moderator will let this sort of "advertisement" go through
on s.p.r. :-)

Walter Kunhardt
(Inst.f.Theor.Phys., Goettingen University)

Doug B Sweetser

unread,
Jan 12, 2000, 3:00:00 AM1/12/00
to
Hello John and Toby:

I think I may be getting a sense of the roles played by complex numbers
and C*-algebras. I will then try to clarify my definition of a
non-commutative C*-algebra of quaternion operators over the field of
quaternions, which drops any requirement of commutativity.

Take all the machinery of quantum mechanics, but instead of complex
numbers, use real numbers. This will result in classical mechanics.
The complex numbers are required for the wonderful interference
effects seen in the 2-slit experiment.

So to do any fun stuff, complex numbers are needed. There is a
problem though: there are operators that do not commute. A simple
complex operator will not do the job. That is where the C*-algebras
come in. The * in C*-algebra is for the involution operator. It is
the properties of the involution operator combined with having a norm
which satisfies ||x* x|| = ||x||^2 that defines a C*-algebra.

There is a clear division of labor: use the C*-algebra for the
operators over a division ring of complex functions. That machinery
makes quantum mechanics work.

Hopefully that summary is fairly accurate.

From now on, I will be trying to precisely define what I mean by my
phrase "non-commutative C*-algebra over a non-commutative field". My
Encyclopedic Dictionary of Mathematics 2 is open to section 36 F and
G. The only criterion I will modify is f2, which is a rule for
involutions on a commutative ring. Since quaternions are not
commutative, it is reasonable to rewrite this criterion in a way
consistent with a non-commutative ring.

A complex number z has an involution, z*. z and z* are sufficient to
describe the degrees of freedom in a complex number. In my previous
post, I defined for a quaternion q three types of involutions, q*, q*1
and q*2. Briefly, q* flips the signs of all but the scalar, q*1 flips
all signs but the first component of the 3-vector, and q*2 flips the
signs of all but the second component of the 3-vector. q, q*, q*1,
and q*2 are sufficient to describe the degrees of freedom in a
quaternion.

The first question is whether quaternions form a Banach algebra. This
requires that the Schwarz inequality hold

||xy|| <= ||x||.||y||

There is absolutely no difference between the proof of this for
complex numbers and quaternions. I hope my readers will accept that
quaternions can form a Banach algebra.

Quoting EDM2 with modifications...
"An involution in a Banach algebra R is an operation x->x* that
satisfies:

(1) (x + y)* = x* + y*

(x + y)*1 = x*1 + y*1
(x + y)*2 = x*2 + y*2

(2) (Lx)* = x* L*
(Lx)*1 = -x*1 L*1
(Lx)*2 = -x*2 L*2

(3) (yx)* = x* y* [note rule 2 = 3]
(yx)*1 = -x*1 y*1
(yx)*2 = -x*2 y*2 [- signs, an indication of spin??]

(4) (x*)* = x
(x*1)*1 = x
(x*2)*2 = x

[Note that for complex numbers, there is only one involution to deal
with. Quaternions must have 3 to cover the other two degrees of
freedom.] A Banach algebra with [these] involution[s] is called a
Banach *-algebra." In the context of quaternions, it will be
understood that there are necessarily 3 linearly-independent
involutions (when I started doing this, I tried to use q, q*1, q*2,
q*3, but that didn't work).

"A Banach *-algebra A satisfying ||x x*|| = ||x||^2 for all x element
of A is called a C*-algebra." Three norms must be defined for the
three involutions.

The conjugate norm
||q|| = q* q = (t, -x, -y, -z)(t, x, y, z)
= (t^2 + x^2 + y^2 + z^2, 0, 0, 0)

The first conjugate norm
||q|| = (q*1)* q*1 = (-t, -x, y, z)(-t, x, -y, -z)
= (t^2 + x^2 + y^2 + z^2, 0, 0, 0)

The second conjugate norm
||q|| = (q*2)* q*2 = (-t, x, -y, z)(-t, -x, y, -z)
= (t^2 + x^2 + y^2 + z^2, 0, 0, 0)

Since the norm evaluates to a real number, it commutes with any
quaternion. In addition, changing the order of the quaternion
multiplication in the norm does not change the value of the norm.
That is critical for the following proofs.

Look at the standard conjugate norm
||q||^2 = (q* q)^2 = q* q q* q = (q q*)* q* q = (q* q)* q* q = ||q* q||

Look at the first conjugate norm

||q||^2 = ((q*1)* q*1)^2 = (q*1)* q*1 (q*1)* q*1
= (q*1)* (q*1)* q*1 q*1
= (q*1 q*1)* q*1 q*1
= ||q*1 q||

The same will hold true for the second conjugate.
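
Here is a quick numerical check of the three conjugate norms above (a
sketch in plain Python, with random components and the usual quaternion
product):

    # q* q, (q*1)* q*1 and (q*2)* q*2 should all equal the real scalar
    # quaternion (t^2 + x^2 + y^2 + z^2, 0, 0, 0).
    import random

    def qmul(p, q):
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    conj  = lambda q: ( q[0], -q[1], -q[2], -q[3])   # q*
    conj1 = lambda q: (-q[0],  q[1], -q[2], -q[3])   # q*1
    conj2 = lambda q: (-q[0], -q[1],  q[2], -q[3])   # q*2

    t, x, y, z = (random.uniform(-1, 1) for _ in range(4))
    q = (t, x, y, z)
    n = t*t + x*x + y*y + z*z
    for prod in (qmul(conj(q), q),
                 qmul(conj(conj1(q)), conj1(q)),
                 qmul(conj(conj2(q)), conj2(q))):
        assert all(abs(a - b) < 1e-12 for a, b in zip(prod, (n, 0, 0, 0)))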

Hopefully this pedantic post explains exactly what I mean by a
non-commutative C*-algebra over a non-commutative field. Let's think
about the broader implications. One is that there is no difference
between how the field behaves and how the operators on that field
behave. Identical rules apply to both, and those rules are the ones
discussed above. A non-trivial problem in gauge theory is that the
equations do not necessarily have an inverse, so creating the
propagator for the field is not always possible. That issue is
resolved if the fields, the functions, and the operators all are part
of the same division algebra. Just like the complex numbers - when
handled correctly using automorphisms - is more subtle than the real
numbers, automorphic quaternion functions are more subtle than the
complex case. One concrete example of this are the differences in the
involutions. Two other linearly independent involutions exist. These
involutions differ only slightly from the standard one, tossing in a
minus sign precisely like spin would.


doug swee...@world.com
quaternions.com

There is a darn good reason to try:
logical consistency between
numbers, fields, functions, operators and propagators.


John Baez

unread,
Jan 13, 2000, 3:00:00 AM1/13/00
to
In article <85flr8$1gh$1...@news.dtc.hp.com>,
(Greg Weeks) <we...@golden.dtc.hp.com> wrote:

>John Baez (ba...@galaxy.ucr.edu) wrote:

>>... suppose we have a C*-algebra that we're not yet
>>thinking of as operators on some Hilbert space. Then we can still
>>describe time evolution by a one-parameter group of *-automorphisms
>>of this C*-algebra. We say such a one-parameter group F(t) is "inner" if
>>there's a one-parameter group of unitary elements U(t) in our C*-algebra
>>such that
>>
>>F(t)(a) = U(t) a U(-t)
>>
>>On the other hand, often the one-parameter group F(t) will not be inner.
>>This is still okay - it happens in quantum field theory a lot. Of course,
>>even if F(t) isn't inner, there are tricks to *make* it inner by
>>fattening up our C*-algebra a bit [....]

>I'm confused. In quantum field theory, I'm used to having faithful
>representations of the algebra in which time evolution is via exp(-itH).
>Doesn't this imply inner time evolution in the algebra?

The big question is: which algebra? Like I said, you can always
fatten up your C*-algebra so time evolution *is* inner. But it's
not always good to do this.

Suppose for example that A is the Weyl C*-algebra describing the
canonical commutation relations for a scalar field on Minkowski
spacetime. Suppose that K is the Fock space for a free scalar field
of mass m. Then we get a faithful representation of A on K. This
means we can think of A as a subalgebra of L(K), the C*-algebra of
all bounded operators on K. So let's do that.

Now let H be the usual Hamiltonian for a free scalar field of mass m.
The operators exp(-itH) are not in the algebra A (except, ahem, when
t = 0). But they do act as automorphisms of A: in other words, if
a lies in A, so does

F(t)(a) = exp(itH) a exp(-itH)

So F(t) acts as a 1-parameter group of outer automorphisms of A -
where "outer" just means "not inner".

However, exp(-itH) is certainly in L(K), so if we define time
evolution on all of L(K) by the above formula, we see that it
acts as a 1-parameter group of inner automorphisms of L(K).

In short, by fattening up our C*-algebra we have made time
evolution inner. There are some very general theorems to this
effect - the above example is just a nice simple example.

Why not always make time evolution inner? Doing so basically
amounts to making "the total energy of the universe" an observable
in our C*-algebra. Sometimes this is a good idea, but sometimes
it's not!

(Well, the Hamiltonian is not usually bounded, so it's not really
going to be *in* our C*-algebra: technically, the question is whether
it's "affiliated to our C*-algebra", meaning that all bounded functions
of it are in our C*-algebra. This concept makes sense whenever we
have a "concrete" C*-algebra, one that is a subalgebra of the bounded
operators on a Hilbert space, together with an unbounded operator
on this Hilbert space.)
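
Here is a finite-dimensional toy analogue of the "fattening up", just to
make the inner/outer distinction vivid (a sketch, assuming numpy; this is
*not* the field-theory example, and it's a single automorphism rather
than a one-parameter group, but the moral - outer in the little algebra,
inner after fattening up - is the same):

    # Inside M_2(C), the diagonal matrices form a commutative subalgebra A.
    # The swap automorphism diag(a,b) |-> diag(b,a) is outer in A (conjugating
    # by a unitary *inside* a commutative algebra does nothing), but it is
    # inner in the fattened-up algebra M_2(C), via the permutation unitary P.
    import numpy as np
    a, b = 2.0, 5.0
    x = np.diag([a, b])                          # an element of A
    swapped = np.diag([b, a])                    # what the automorphism does to it
    P = np.array([[0.0, 1.0], [1.0, 0.0]])       # unitary, but not diagonal (not in A)

    u = np.diag(np.exp(1j * np.array([0.4, -1.2])))   # a typical unitary *in* A
    assert np.allclose(u @ x @ u.conj().T, x)         # conjugation inside A is trivial
    assert np.allclose(P @ x @ P.T, swapped)          # but it's inner in M_2(C)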

John Baez

unread,
Jan 13, 2000, 3:00:00 AM1/13/00
to
In article <85fki1$19a$1...@news.dtc.hp.com>,
(Greg Weeks) <we...@golden.dtc.hp.com> wrote:

>John Baez (ba...@galaxy.ucr.edu) wrote:

>: >bj: What counts as an 'observable'?

>: On the philosophical side [...]
>
>: On the mathematical side [...]

>"philosophical" vs "mathematical". What about "physical"?

Hey, no fair! You left out my discussion of the "practical" side
of things, which was the only interesting part of the post. So now
it'll be hard for me to resist the urge to write a somewhat argumentative
reply....

>My point here
>is that there is no physics in a C*-algebra alone. To obtain physics, you
>have to just *know* which elements are observable and how to observe them.
>The algebra doesn't tell you. It is not self-interpreting.

This is the same for any mathematical theory of physics, or indeed,
any formal axiomatized theory whatsoever. For example, even in
classical mechanics, the mathematical formalism by itself does not
tell you how to apply that formalism to the real world. Someone
has to come along and tell you "(x,y,z) stands for the position of
this apple, which you measure by taking this ruler and.... "

So, while important and very complicated, I'd call this part of the
"philosophical" aspect of "what counts as an observable" - the general,
business of what it means to observe something, what quantities we can
observe, how we actually observe them, how we connect this up to our
mathematical formalism, and so on. I was more in the mood for talking
about stuff that was specific to C*-algebras.

>For example, suppose we had an algebra generated by position and momentum,
>but we thought position should be measured like momentum and vice versa.
>Then our theory would give lousy experimental results, but the theory alone
>would not tell us how to remedy this.

Right. Of course, this applies to classical mechanics too.

>Similarly, consider two C*-algebras, where one is obtained from the other
>by the replacement i --> -i. The two algebras are physically
>indistinguishable, but the quantity i*(Q*P - P*Q) is different in the two
>algebras. In a physical sense, then, that quantity is unobservable, even
>though mathematically it is either 1 or -1 (depending on your algebra).

The classical analogue of this is switching the sign of the Poisson
brackets and noting that if {P,Q} = 1 with the original Poisson brackets,
now it's -1. This kind of business is closely related to time reversal,
by the way.

>So, as I've said before, I have to disagree with Haag when he says that all
>the physics can be found in the net of local C*-algebras (ie, the algebras
>of observables restricted to various finite regions of space-time).

I think statements like "all of physics can be found in [some
mathematical structure]" are always problematic, for the reason
I mentioned above: mathematical formalisms are never self-interpreting.

When people make such statements, they are usually engaged in a kind
of salesmanship, so one should read what they say as advertising
rather than statement of fact.


(Greg Weeks)

unread,
Jan 13, 2000, 3:00:00 AM1/13/00
to
Walter Kunhardt (kunh...@Theorie.Physik.UNI-Goettingen.DE) wrote:
: Is there, really?
: At least as high energy physics is concerned, are not all experiments
: which one want to describe by a (quantum field) theory collision
: experiments?

Yes, but there is *much* more to physics than high energy physics. There
isn't one set of physical laws for accelerators and another for everything
else. Quantum field theory needs to account for all the physics that came
before it; and I don't believe that the net of algebras of observables can
do this if you have no a priori idea of what the observables mean.

[Unfortunately, I don't know how to derive pre-QFT physics from QFT even
*with* an a priori idea of what the observables mean. In the rather
broadly titled book "Quantum Physics", Glimm and Jaffe imply that the
derivation is too elementary to bother with. But even with a PhD in
particle physics (from long ago) and all of Lawrence Yaffe's patient
explanations (less long ago), I still can't see it.]

: The point of view of local quantum physics (which Greg criticises in his
: post) is explained in the recent paper by Haag himself,

Oh, no, the only thing I criticise is the assertion that *all* the relevant
physics is in the net of algebras (without any need for further information
about the physical meaning of the observables). Once again: Suppose you
believe that Q is momentum and P is position. Isn't that a different
theory from the other way around?


Greg


Walter Kunhardt

unread,
Jan 14, 2000, 3:00:00 AM1/14/00
to
On 13 Jan 2000, (Greg Weeks) wrote:

> Walter Kunhardt (kunh...@Theorie.Physik.UNI-Goettingen.DE) wrote:

> : At least as far as high energy physics is concerned, are not all
> : experiments which one wants to describe by a (quantum field) theory
> : collision experiments?

> Yes, but there is *much* more to physics than high energy physics.

Of course.

> There isn't one set of physical laws for accelerators and another
> for everything else.

Well, in a sense, I think there is, actually. Let me put it like this:

Of course there is not a "physics" valid for accelerators and "another
one" valid for the solar system, say. But luckily, we have _theories_
which work quite well for certain limited classes of phenomena.

In the (rather trivial) example at hand: The standard model describes
elementary particle physics, while there's General Relativity (or
Newtonian gravity) for the motion of planets. Many of us feel that it
would be nice to merge those two into one and/or to have a "theory of
everything". But as long at the latter is not available
(and I've heard that that there are still some small difficulties :-) ),
I believe that it is legitimate to try to understand the physics of
accelerators without taking into account effects of quantum gravity.

Therefore: Relativistic quantum field theories (plural! -- i.e.: Rel. QFT
is not a single theory, but the framework for different models, as, e.g.,
QED, QCD, Standard model, ...) on Minkowski spacetime
seem to have some useful application to the description of what is
observed in accelerators. (I think "description of what is observed"
should not be confused with "understanding of what really happens".)

> Quantum field theory needs to account for all the physics that came
> before it; and I don't believe that the net of algebras of observables can
> do this if you have no a priori idea of what the observables mean.
>
> [Unfortunately, I don't know how to derive pre-QFT physics from QFT even
> *with* an a priori idea of what the observables mean.

Nor do I, in a technical sense, but I think the "net of anonymous
observables" approach is (in principle) _not less_ powerful in that
respect as "usual" Lagrangean QFT.

In more precise terms and the concrete example of QED:
In the Haag-Kastler approach to QFT, we believe that there is a local net
of observables O |---> A_{QED}(O) which carries the same information
as the QED Lagrangean L given in terms of Psi(x), A(x).

There is the following evidence for this belief:
Quantum fields (that is, operator-valued distributions) yield the same
S-matrix if they generate the same nets of observables.
(In our lingo, these fields belong to the same "Borchers class". Have a
look at Haag's book, where this term and the technical details are
explained.)

> : The point of view of local quantum physics (which Greg criticises in his
> : post) is explained in the recent paper by Haag himself,

> Oh, no, the only thing I criticise is the assertion that *all* the relevant
> physics is in the net of algebras (without any need for further information
> about the physical meaning of the observables). Once again: Suppose you
> believe that Q is momentum and P is position. Isn't that a different
> theory from the other way around?

Sure it is, but there is no danger of confusion (at least in this very
example, but you will certainly look for another one):
Position does not appear as an operator, but as a label for the
localisation of the observables. Momentum, in contrast, is an operator
since it comes as the generator of the unitary group of translations.

Momentum, actually, is a very nice example of an operator which is
"automatically" present. In fact, there is a theorem (due to Borchers and
Buchholz, cf. Comm. Math. Phys. _97_ (1985), pp. 169-185) which says that if
there exists a 4-momentum operator which is, in a certain sense, a limit
of local observables and if the energy is bounded below, then this
4-momentum operator is unique (all technical details omitted...).

Walter.

John Baez

unread,
Jan 14, 2000, 3:00:00 AM1/14/00
to
I was talking about making a 1-parameter group of automorphisms of
a C*-algebra inner by "fattening it up a bit", and I gave the following
example....

>Now let H be the usual Hamiltonian for a free scalar field of mass m.
>The operators exp(-itH) are not in the algebra A (except, ahem, when
>t = 0). But they do act as automorphisms of A: in other words, if
>a lies in A, so does
>
>F(t)(a) = exp(itH) a exp(-itH)
>
>So F(t) acts as a 1-parameter group of outer automorphisms of A -
>where "outer" just means "not inner".
>
>However, exp(-itH) is certainly in L(K), so if we define time
>evolution on all of L(K) by the above formula, we see that it
>acts as a 1-parameter group of inner automorphisms of L(K).

We don't really need to replace A by all of L(K) to make our
1-parameter group inner. More economically, we can just use
the smallest C*-algebra containing A and all the operators
exp(itH). Concretely speaking, this C*-algebra contains:

1) all iterated products of operators in A and operators of
the form exp(itH),

2) all linear combinations of the operators in 1),

3) all limits (in the norm topology) of operators in 2).

Basically what this amounts to is making the Hamiltonian into
an observable by *decreeing* that it's affiliated to your
C*-algebra of observables.

And of course this works far more generally than in the example
above. It works whenever we have a faithful representation of
a C*-algebra in which the 1-parameter time evolution group F(t)
can be written as

F(t)(a) = exp(itH) a exp(-itH)

The SNAG theorem gives conditions under which this happens...
and these conditions are general enough to apply to all the
quantum field theories one normally considers (on Minkowski
spacetime). So in this kind of situation, we can always
make time evolution inner by fattening up our C*-algebra a
bit.

Toby Bartels

unread,
Jan 14, 2000, 3:00:00 AM1/14/00
to
John Baez <ba...@galaxy.ucr.edu> wrote at last:

>So now we come to the much trickier and more interesting
>problem: what's the free C*-algebra on an isometry? I'll
>give you a mystical-sounding hint: it's the "noncommutative
>unit disc".

Yeah, I know this one's next.

At first I thought it was the noncommutative circle,
but that doesn't make a lot of sense since the circle's 1D.
Now, the commutative unit disc is the continuous complex functions f
of 2 variables, r and t, where r is in [0,1] and t is on the circle,
such that f(0,t) is the same no matter what t is.
The noncommutative unit disc is all those functions where now rt != tr.
If I knew noncommutative geometry, maybe this would make sense to me.
I suppose I could calculate in such an algebra,
just by not applying any rules I couldn't justify
(which is pretty much how I do everything in math),
but you want me to give it a familiar description.
I don't think I'm going to get that from the noncommutative disc.

Now, you mentioned before a function from this algebra A to C(T).
We have this because C(T)'s unitary is also an isometry.
So, if A is generated by the isometry I
and C(T) is generated by the unitary U,
then F(I) = U and F(II*) = 1, although II* != 1.
F([I,I*]) = 0 even though [I,I*] != 0,
so F has a nontrivial kernel, in fact a huge kernel.
In fact, ker F is about the whole problem, so this isn't getting me anywhere.

Perhaps I should try to describe I in terms of r and t.
I should reduce to just t in the case r = 1.
(This is so F(I) = U = t; in general, F(f) = f(1,.).)
The simplest way to do this is I = t.
But t* = 1/t, so t is unitary, so this is wrong.
While I'm computing t*, I should say r* = r, so r is Hermitean.
I'm not sure these conclusions are valid
(could I have said they were normal just because they're complex valued?),
but since I've never read anything on noncommutative geometry
I believe I'm entitled to make stuff up if I think it makes sense.
OK, so I is a combination of r and t, maybe rt.
This has the value of being the identity function on the commutative disc.
Also, note that rt = t when r = 1, so F(rt) = U.
Then II* = rtt*r* = rt(1/t)r = r^2 != 1.
But I*I = t*r*rt = (1/t)r^2t != 1 either.
Of course, there are other options than rt
which will be the identity function on the commutative disc.
But tr is just as bad, and (sqrt r)t(sqrt r) and (rt + tr)/2 are worse.
But perhaps an identity function is wrong,
since its modulus isn't always 1 on the commutative disc.
This suggests something like rt(1/r), but that doesn't work either.
Should I try to find a function that reduces to t on the commutative disc?
And would finding it help me understand better?

I don't know.


-- Toby
to...@ugcs.caltech.edu


Toby Bartels

unread,
Jan 14, 2000, 3:00:00 AM1/14/00
to
John Baez <ba...@galaxy.ucr.edu> wrote in part:

>a(ra') = r(aa')

>(By the way, there are also darn good reasons why people don't talk


>about algebras over noncommutative rings, so you'll find that defining
>an "algebra over the quaternions" is a tricky business. There may be
>some interesting way to do it, but I don't know what it is.)

In a way, it violates the spirit of a noncommutative ring
to insist upon the above rule. I know, I know,
from a categorical perspective, multiplication should be bilinear.
But that doesn't mean we can't consider cases where it's not.

Define a C* algebra A over the quaternions H as follows:
First, A should be a left and right vector space over H,
where both vector space structures use the same addition operation.
A should also be a ring, under that addition operation.
Then extend associativity by h(ab) = (ha)b, (ah)b = a(hb), and (ab)h = a(bh),
where (as always here) h in H and a,b in A.
Let * be an involution on A, with (a + b)* = a* + b*,
(ab)* = b*a*, (ha)* = a*h*, and (ah)* = h*a*,
where h* for h in H is the standard conjugation.
Let |.| be a function from A to the nonnegative reals.
Insist |a + b| <= |a| + |b| and the same for multiplication.
If |a| = 0 implies a = 0, d(a,b) := |a - b| is a metric.
Insist that this is the case and that the metric is complete.
Require |ha| = |h||a|, |a*| = |a| and |a*a| = |a|^2.
I don't think this is always included in C* algebras,
but let's say there is a multiplicative identity 1,
with h1 = 1h and |1| = 1. I think that's all.
Then H itself is a C* algebra over the quaternions.
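
As a spot check (plain Python, random quaternions, with |.| the usual
Euclidean norm and * the standard conjugation), H does seem to pass the
axioms just listed:

    # |a+b| <= |a|+|b|, |ab| <= |a||b| (in fact equality in H),
    # |ha| = |h||a|, |a*| = |a|, |a*a| = |a|^2, |1| = 1.
    import math, random

    def qmul(p, q):
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    conj = lambda q: (q[0], -q[1], -q[2], -q[3])
    norm = lambda q: math.sqrt(sum(c * c for c in q))
    qadd = lambda p, q: tuple(x + y for x, y in zip(p, q))
    rand = lambda: tuple(random.uniform(-1, 1) for _ in range(4))

    a, b, h = rand(), rand(), rand()
    eps = 1e-9
    assert norm(qadd(a, b)) <= norm(a) + norm(b) + eps
    assert norm(qmul(a, b)) <= norm(a) * norm(b) + eps
    assert abs(norm(qmul(h, a)) - norm(h) * norm(a)) < eps
    assert abs(norm(conj(a)) - norm(a)) < eps
    assert abs(norm(qmul(conj(a), a)) - norm(a) ** 2) < eps
    assert abs(norm((1, 0, 0, 0)) - 1) < eps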


-- Toby
to...@ugcs.caltech.edu


John Baez

unread,
Jan 15, 2000, 3:00:00 AM1/15/00
to
In article <85mbu0$4...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>John Baez <ba...@galaxy.ucr.edu> wrote at last:

>>So now we come to the much trickier and more interesting
>>problem: what's the free C*-algebra on an isometry? I'll
>>give you a mystical-sounding hint: it's the "noncommutative
>>unit disc".

>Yeah, I know this one's next.

>At first I thought it was the noncommutative circle,
>but that doesn't make a lot of sense since the circle's 1D.

Right.

Just to make sure our TV audience out there understands
the notation in this game:

A is the free C*-algebra on an isometry - and we're calling the
isometry I.

C(T) is the algebra of continuous functions on the circle T.
This is also the free C*-algebra on a unitary U, which happens
to be the function exp(i theta), where theta is the angle going
around the circle.

F: A -> C(T) is the unique *-homomorphism sending I to U.

Anyway, as you note, the kernel of F is huge: it's the ideal
generated by 1 - I I*, since we have

U* U = U U* = 1

in C(T) but only

I* I = 1

in A. So in some vague sense we expect A to be "functions on a
noncommutative space with higher dimension than the circle".

Let's not worry too much about what this means exactly - we're
not really gonna need the technical theory of "dimension" or
any other power tools from noncommutative geometry to solve this
problem; some basic insight and creativity should do the job.

>Now, the commutative unit disc is the continuous complex functions f
>of 2 variables, r and t, where r is in [0,1] and t is on the circle,
>such that f(0,t) is the same no matter what t is.
>The noncommutative unit disc is all those functions where now rt != tr.
>If I knew noncommutative geometry, maybe this would make sense to me.

Hmm, I hadn't thought of trying to solve this problem this way.
You're taking this "noncommutative disc" stuff really literally.
Well, let's try it. Can we define r and t as elements of A?
I guess we want

r = I I*

because this is exactly what gets sent to 1 by F, corresponding
to the fact that the unit circle is the part of the unit disc
where r = 1. (I.e., the algebra of functions on the unit circle
is the algebra of the functions on the ordinary unit disc modulo
the ideal generated by the relation r = 1, so it's nice to have
the same thing for the noncommutative disc.)

This suggests, by the way, that you understand everything you can
about r = I I* when I is an isometry. What can you say about it?

As for the noncommutative analog of t, well, I'm not so sure
how to think of this as an element of A. But I'm not really
so worried, because even on the ordinary disc D, polar coordinates
define a continuous function r: D -> [0,1] but not a continuous
function t: D -> T. So I expect there to be some nice element
"r" in the algebra A, but not a nice element "t".

>I suppose I could calculate in such an algebra,
>just by not applying any rules I couldn't justify
>(which is pretty much how I do everything in math),
>but you want me to give it a familiar description.

Well, either familiar, or at least satisfying - you may wind up
reinventing some famous piece of mathematics if you don't already
know it, but that's okay.

>I don't think I'm going to get that from the noncommutative disc.

Okay, it's also good to just mess around with the algebra a bit.

>Now, you mentioned before a function from this algebra A to C(T).
>We have this because C(T)'s unitary is also an isometry.
>So, if A is generated by the isometry I
>and C(T) is generated by the unitary U,
>then F(I) = U and F(II*) = 1, although II* != 1.
>F([I,I*]) = 0 even though [I,I*] != 0,
>so F has a nontrivial kernel, in fact a huge kernel.
>In fact, ker F is about the whole problem, so this isn't getting me anywhere.

Oh, it actually is - by focusing attention on the kernel, you reminded
me that the kernel is generated by the relation r = 1, which is
analogous to the equation defining the unit circle as a subspace of
the unit disc. This was the basis of my remarks above... and it
suggests a little more thought about this element r could not be
a bad thing.

>Perhaps I should try to describe I in terms of r and t.
>I should reduce to just t in the case r = 1.
>(This is so F(I) = U = t; in general, F(f) = f(1,.).)
>The simplest way to do this is I = t.
>But t* = 1/t, so t is unitary, so this is wrong.
>While I'm computing t*, I should say r* = r, so r is Hermitean.
>I'm not sure these conclusions are valid
>(could I have said they were normal just because they're complex valued?),
>but since I've never read anything on noncommutative geometry
>I believe I'm entitled to make stuff up if I think it makes sense.
>OK, so I is a combination of r and t, maybe rt.
>This has the value of being the identity function on the commutative disc.
>Also, note that rt = t when r = 1, so F(rt) = U.
>Then II* = rtt*r* = rt(1/t)r = r^2 != 1.
>But I*I = t*r*rt = (1/t)r^2t != 1 either.
>Of course, there are other options than rt
>which will be the identity function on the commutative disc.
>But tr is just as bad, and (sqrt r)t(sqrt r) and (rt + tr)/2 are worse.
>But perhaps an identity function is wrong,
>since its modulus isn't always 1 on the commutative disc.
>This suggests something like rt(1/r), but that doesn't work either.
>Should I try to find a function that reduces to t on the commutative disc?
>And would finding it help me understand better?

Well, I think the problems you're having here are due to the fact
that even in the commutative case, there's no way to factor the
identity function as a product of *continuous* functions r: D -> [0,1]
and t: D -> T. r is okay but t is not. So I think you should give up
trying to find t.

There's also another angle you haven't taken advantage of yet.
By the GNS theorem, we can think of A as a C*-algebra of operators
on some Hilbert space. To understand A, you just need to understand
the C*-algebra of operators generated by an isometry I satisfying
no extra relations beyond those it *has to* satisfy, namely I* I = 1.
Remember, to understand a gadget defined by a universal property, we
just need to find the most vanilla-flavored gadget with that property:
roughly speaking, a gadget with that property and no *extra* properties.

So: what is the most vanilla-flavored isometry in the world?
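
One concrete thing to play with numerically while pondering that (my own
illustration: a finite N x N truncation of the shift e_n -> e_{n+1}, which
is only approximately an isometry, since the truncation breaks I*I = 1 on
the last basis vector):

import numpy as np

N = 6
S = np.eye(N, k=-1)       # S e_n = e_{n+1}; the truncation sends e_{N-1} to 0

print(S.T @ S)            # diag(1,...,1,0): equal to 1 except in the last slot
print(S @ S.T)            # diag(0,1,...,1): a projection, definitely not 1
print(S.T @ S - S @ S.T)  # nonzero only in the two corner entries

On the honest infinite-dimensional space the first of these really is 1,
the second is 1 minus a rank-one projection, and their difference is that
rank-one projection.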


John Baez

unread,
Jan 15, 2000, 3:00:00 AM1/15/00
to
In article <85o1l6$4...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>John Baez <ba...@galaxy.ucr.edu> wrote in part:

>>a(ra') = r(aa')

>>(By the way, there are also darn good reasons why people don't talk
>>about algebras over noncommutative rings, so you'll find that defining
>>an "algebra over the quaternions" is a tricky business. There may be
>>some interesting way to do it, but I don't know what it is.)

>In a way, it violates the spirit of a noncommutative ring
>to insist upon the above rule. I know, I know,
>from a categorical perspective, multiplication should be bilinear.
>But that doesn't mean we can't consider cases where it's not.

Ah, there's nothing like an elder sternly wagging his finger
to get a rebellious youth eager to do what was just forbidden.
Most of the time the youth learns the hard way why the forbidden
was forbidden, but occasionally one is sufficiently clever to
do something new and interesting without running into disaster.

Okay, let's see....

>Define a C* algebra A over the quaternions H as follows:
>First, A should be a left and right vector space over H,
>where both vector space structures use the same addition operation.
>A should also be a ring, under that addition operation.
>Then extend associativity by h(ab) = (ha)b, (ah)b = a(hb), and (ab)h = a(bh),
>where (as always here) h in H and a,b in A.

Of course this works far more generally so far. Here's what
I hear you saying: "If we have a noncommutative ring R, the
category of left R-modules is not a monoidal category, so it
makes no sense to define an R-algebra to be a monoid object
in this category. So let's work instead with the category
of R-bimodules. This *is* a monoidal category, so we can
define an R-algebra to be a monoid object in this monoidal
category."

And how could I argue with this? If we're gonna mess with
algebras over noncommutative rings, maybe this is the way to
do it. So let's try it.....

>Let * be an involution on A, with (a + b)* = a* + b*,
>(ab)* = b*a*, (ha)* = a*h*, and (ah)* = h*a*,
>where h* for h in H is the standard conjugation.

I could translate this into category theory too, but it's
probably not worth bothering yet: the bold move was to
study algebras over noncommutative rings; now that I see
there's a sensible category-theoretic way of understanding
them as monoid objects, I am willing to play around with
them a bit in a less formal way.

(If the category theory hadn't worked out, it'd mean your
definition was going against the Tao of Mathematics, fighting
against powerful patterns instead of taking advantage of them,
so you would be beset by problems at every turn, and I would
want nothing to do with it.)

>Let |.| be a function from A to the nonnegative reals.
>Insist |a + b| <= |a| + |b| and the same for multiplication.
>If |a| = 0 implies a = 0, d(a,b) := |a - b| is a metric.
>Insist that this is the case and that the metric is complete.
>Require |ha| = |h||a|, |a*| = |a| and |a*a| = |a|^2.
>I don't think this is always included in C* algebras,
>but let's say there is a multiplicative identity 1,
>with h1 = 1h and |1| = 1. I think that's all.
>Then H itself is a C* algebra over the quaternions.

Okay, great. But having gotten this far, we certainly
don't want to stop here. We should see if we can set up
some useful tools for the people who like quaternionic
quantum mechanics. Adler has a book on this stuff:

Stephen L. Adler, Quaternionic Quantum Mechanics and Quantum Fields,
Oxford U. Press, Oxford, 1995.

but when I looked at it, it seemed the mathematical infrastructure
could really use some work - for example, I didn't see him addressing
the basic problem of how to tensor modules over a noncommutative ring.

Okay, so: given a quaternionic vector space, do we get a
quaternionic algebra of operators on this vector space?
Of course, we get to choose what we mean by "quaternionic
vector space" - either a left H-module or an H-bimodule.

And: given a quaternionic Hilbert space, do we get a
quaternionic C*-algebra of operators on this Hilbert
space?

And - sort of sneaking up on the same questions from the
other side - do n x n quaternionic matrices form a
quaternionic C*-algebra in your sense?

John Baez

unread,
Jan 16, 2000, 3:00:00 AM1/16/00
to
In article <85qu6r$g...@charity.ucr.edu>, John Baez <ba...@galaxy.ucr.edu> wrote:

>Okay, so: given a quaternionic vector space, do we get a
>quaternionic algebra of operators on this vector space?
>Of course, we get to choose what we mean by "quaternionic
>vector space" - either a left H-module or an H-bimodule.

We also get to choose what we mean by "operator"!

>And: given a quaternionic Hilbert space, do we get a
>quaternionic C*-algebra of operators on this Hilbert
>space?

I don't mean to drown Toby Bartels with questions - he's probably
busy working out that stuff about the noncommutative unit disc -
so let me take a crack at these myself.

First of all, if R is a noncommutative ring and M, M' are left
R-modules, the set hom(M,M') of left R-module morphisms is not
a left R-module, unless I'm being dumb. Similarly, if M and M'
are R-bimodules, the set hom(M,M') of R-bimodule morphisms is
not an R-bimodule.

This is a bit of a drag, since it means that the 2 most obvious
ways of defining "quaternionic vector spaces" (as either left
H-modules or H-bimodules) and "operators" between these (as
either left H-module or H-bimodule morphisms) don't give us
a quaternionic vector space of operators from one quaternionic
vector space to another.

(Remember, H means "quaternions".)
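
Here is a concrete version of the obstruction (Python; quaternions encoded
as 4-vectors (t, x, y, z) with a hand-rolled product - my own illustration).
Take M = M' = H as a left H-module: right multiplication by a fixed
quaternion is a left H-module morphism, but the obvious attempted action
(h.f)(m) = h f(m) produces maps that are no longer left H-linear, precisely
because H is noncommutative.

import numpy as np

def qmul(p, q):
    t1, x1, y1, z1 = p
    t2, x2, y2, z2 = q
    return np.array([t1*t2 - x1*x2 - y1*y2 - z1*z2,
                     t1*x2 + x1*t2 + y1*z2 - z1*y2,
                     t1*y2 - x1*z2 + y1*t2 + z1*x2,
                     t1*z2 + x1*y2 - y1*x2 + z1*t2])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
q = np.array([1., 2., 3., 4.])          # an arbitrary quaternion
m = np.array([0.5, -1., 2., 0.25])      # an arbitrary element of M = H

f = lambda m: qmul(m, q)                # right multiplication by q
print(np.allclose(f(qmul(i, m)), qmul(i, f(m))))   # True: f is left H-linear

g = lambda m: qmul(j, f(m))             # the attempted "(j.f)"
print(np.allclose(g(qmul(i, m)), qmul(i, g(m))))   # False: j(im)q != i(jm)q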

There's something else we can do, though. Let's work with
H-bimodules since that's what Toby likes - he essentially
noticed that there's a nice tensor product of H-bimodules,
allowing us to define an algebra over the quaternions to be
a monoid object in the monoidal category of H-bimodules.

What we'd really like is a way to make this monoidal category
into a "closed category" - meaning roughtly that we can define
for any H-bimodules M, M' an H-bimodule called hom(M,M'), which
gets along with the tensor product in a nice way.

I don't see how to do this.

I'm sure any algebraist worth his or her salt could instantly
tell me whether this is possible. If it's not, making the
algebra of operators on a quaternionic Hilbert space into a
quaternionic C*-algebra is doomed to be a rather dismal business.

I wonder how Stephen Adler's book addresses this issue.

>And - sort of sneaking up on the same questions from the
>other side - do n x n quaternionic matrices form a
>quaternionic C*-algebra in your sense?

It seems to me like they do. There's an obvious way to make
n x n quaternionic matrices into an H-bimodule, and ordinary
matric multiplication then makes them into an algebra over the
quaternions, and there's an obvious *-structure (adjoint matrix
together with quaternionic conjugation of each entry), and there's
an obvious norm (the operator norm). I guess one just has to
check the C*-equation

|| A A* || = ||A||^2 .

I'm too lazy to do this, but at least it works in the 1x1 case.
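
For the equally lazy, a numerical spot-check (my own sketch): embed each
quaternion t + xi + yj + zk as the 2x2 complex matrix
[[t + xi, y + zi], [-y + zi, t - xi]]. An n x n quaternionic matrix then
becomes a 2n x 2n complex matrix, the quaternionic adjoint becomes the
ordinary conjugate transpose, and the operator norm can be computed
directly.

import numpy as np

def q2c(t, x, y, z):
    return np.array([[t + 1j*x,  y + 1j*z],
                     [-y + 1j*z, t - 1j*x]])

def qmat2c(Q):                      # Q: shape (n, n, 4), entries (t, x, y, z)
    n = Q.shape[0]
    M = np.zeros((2*n, 2*n), dtype=complex)
    for a in range(n):
        for b in range(n):
            M[2*a:2*a+2, 2*b:2*b+2] = q2c(*Q[a, b])
    return M

rng = np.random.default_rng(1)
A = qmat2c(rng.standard_normal((3, 3, 4)))   # a random 3x3 quaternionic matrix

opnorm = lambda M: np.linalg.norm(M, 2)      # largest singular value
print(np.isclose(opnorm(A @ A.conj().T), opnorm(A)**2))   # True: ||A A*|| = ||A||^2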

Gerard Westendorp

unread,
Jan 17, 2000, 3:00:00 AM1/17/00
to
John Baez wrote:

> >> It's pretty hard to prove that the only division algebras over
> >> the real numbers are the reals, complexes and quaternions.

I am not an expert on this, but it seems like a really fascinating
subject. So I hope you don't mind me playing around a bit.
I know this has all been done long ago, but some of the fun has been
cut off by the abstract language. Once you know that language, it
will probably be a great help, but for those who don't, it remains
a barrier.

Looking at the algebra of addition and subtraction, it is interesting
that there is a unique real number (x) for each equation.

x=a+b+...

But this algebra works just as well for 2D vectors. Or for
3D vectors. Hey, so this algebra does not define the real
numbers, the integers, or the complex numbers (which are
a special case of 2D vectors).

I suppose the next step is the introduction of multiplication/division.
I remember this seemed pretty weird when I was 4 years old.

Multiplication/division for positive reals is topologically equivalent
to addition/subtraction for positive reals. This is why a slide rule works.

A slide rule uses a "warped" representation of the reals.

Come to think of it
( I am thinking chronologically here, because the reason maths
is difficult to read is that mathematicians write things down
*after* they have figured everything out. By that time, they are
talking a different language ),
the addition/subtraction algebra leads to the introduction of
*negative* numbers.
Interesting here that the addition/subtraction algebra does not
force us to use reals yet. The algebra works perfectly with
whole numbers: all equations with whole numbers have a
unique solution in whole numbers.

But back to the introduction of multiplication/division.
Multiplication probably began as a shorthand for repeated
addition:

3*n = n+n+n

If you strictly adhere to this shorthand, no new algebra is introduced.
But once you start to generalize into

x*m = n

and you want a solution for all cases, you are forced to introduce
rationals, reals, and complex numbers successively.

First, you find yourself introducing the dreaded "fractions"
(rational numbers), by figuring out the solution for x, if m
does not integer-divide n. To the horror of Pythagoras, who
wanted to describe everything in whole numbers, we are also
forced to introduce reals. The famous case is where Pythagoras
tried to figure out the length of the hypotenuse of a 1*1
right-angled triangle, using his own theorem.
He tried to figure out a ratio A/B, such that:

1*1 +1*1 = (A/B)*(A/B)

or (A*A) / (B*B) = 2.

Then he wondered if A and B were odd or even numbers.
The number of 2's in A*A must be 1 more than the number
of 2's in B*B. So the number of extra 2's in A is *half* a
2 !!
[We would say 2^(1/2)]

Pythagoras was shocked. I don't think he introduced reals,
probably they were introduced later.

But even reals are not sufficient to generate an algebra that
has a solution for each equation.

x*x = -1

has no solution in reals.
I understand complex numbers were first introduced as
a trick to aid the solving of equations in reals. Later, they
found out that these funny things were quite amazing. In fact,
they have the property that all equations with addition, subtraction,
division and multiplication have a solution. (Except those involving
division by zero)

Unfortunately, complex numbers are hard to visualise. Under addition,
they are just like 2D vectors. But under multiplication, they twist
around in a Mandelbrot-like way. But we are led to them by sheer
logic, so we had better study them.

Another thing that has happened along the way, is that we have lost
the uniqueness:

x*x = 1 has *two* solutions, 1 and -1.

In fact, x^n = 1 has n solutions (using ^n here as a shorthand for repeated
multiplication).

Interesting also that the introduction of new operations like
power raising, exp, log, sin functions, do not spoil the
algebraic completeness of the complex numbers. This makes them
amazing, so we should like them.

So are there any other "thingies" that have the same property?
(What is the mathematical name for "thingy", in the sense I am
using it?)

Well, I could build a 2D vector of complex numbers
(c1,c2). The rules for addition and multiplication could be

(c1,c2) + (d1,d2) = (c1+d1,c2+d2)
(c1,c2) * (d1,d2) = (c1*d1,c2*d2)

But I have created something quite boring. It is just 2 sets
of completely independent complex numbers. So I guess
mathematicians will have thought of some definition that rules
out these uninteresting cases.
Probably, they have said that there is a kind of mapping
between {c} and {c1,c2}.
In fact, if you had an equation in {c}
( I am already beginning to talk like a mathematician!)
you could map the c-numbers onto {c1,c2} for example by
saying:
c -> (c , 2 ) (2 is chosen at random)

Then, you could do the entire equation in {c1,c2}, and solve it.
Then, you could map the solution (c') back into {c}:

(c' , 2 ) -> c'

And you would get just the right answer in {c}.
The existence of such a mapping (functor?) really is a
way of saying the two number systems are equivalent.

So can we find a "thingy" that satisfies the algebra of complex
numbers, but which is *not* topologically equivalent to {c}?

Just using the examples of whole numbers and reals:

Under addition/subtraction, there is no "need" for the reals,
because you can map each whole number to a real, do addition
and subtraction, map back to the whole numbers, and get exactly
the same answer as without the mapping.
But under multiplication, you could try the same mapping,
and get stuck. The reals can do tricks that the wholes can't
do. Interestingly, reals are a *generalisation* of wholes.
For each equation in whole numbers, there is an
equation in reals, but not always vice versa.
Similarly, the complex numbers are a generalisation of
reals. In fact:
N+ -> N -> Z -> Q -> R -> C

Are quaternions generalisations of C?
Yes, because (x,y,0,0) behaves just like (x,y). But it only
works for the special case where the 3rd and 4th coefficients
are 0.


hmm..
(This 'hmm' lasted for 2 days)

Before introducing multiplication, you could first define
*repeated addition* completely independent of the
concept of multiplication, using only addition. Let us define
n # x as x+x+x+... repeated n times. (Note that n is an integer
which is not necessarily a member of the set {x})
Next we could use repeated addition to define a *vector space*.
For example, in C, I could choose 2 "generators":
x1 = 0.001
and
x2 = i*0.0009
Then, I could do repeated additions of these two generators.

m # x1 + n# x2

The set of all these possibilities defines a kind of vector space,
which eventually gets pretty near any element of {x}.
We could define the *dimension* as the number of independent
'addition' generators needed to cover the set.
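
A quick sketch of "gets pretty near" with those two generators (my own
illustration; the target value is arbitrary):

x1 = 0.001
x2 = 0.0009j
target = 3.14159 - 2.71828j

m = round(target.real / 0.001)     # how many copies of x1 to add up
n = round(target.imag / 0.0009)    # how many copies of x2 to add up
approx = m * x1 + n * x2           # m # x1 + n # x2, written with ordinary *
print(abs(approx - target))        # well under 0.001: within the lattice spacing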

Note also that addition is commutative (by definition). So it does
not matter which 'route' we take:

m # x1 + n# x2 = (m-3) # x1 + n# x2 + 3 # x1 = ...

This is contrary to a curved space, in which translations in
different directions do not commute.


Having defined this vector space, we can now do a multiplication
using the *distributive* property (this property exists by *definition*)


x3 * x = x3 * (m # x1 + n # x2)
       = m # (x3 * x1) + n # (x3 * x2)


This means that we can figure out multiplication by defining how the
generators behave under multiplication!

Following quaternion notation, we could call the generators
i,j,k,..

We then have to say what happens:
i*i = p1_ii( i ,j, k,...)
i*j = p2_ij( i ,j, k,...)
....

In other words, the product can be written with a tensor of
structure constants:

z_k = p_kji y_j x_i

Holding x fixed, this reads z_k = M(x)_kj y_j, where
M(x)_kj = p_kji x_i is an ordinary matrix of real numbers.

So "multiplying by x" is a matrix acting on y!

So it seems that all generalisations of numbers can be
represented by matrices of real numbers!

eg1: the complex numbers:

( x -y )
( y x )

eg2: the quaternions:

( t  -x  -y  -z )
( x   t  -z   y )
( y   z   t  -x )
( z  -y   x   t )

The converse is probably not true.
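
This can be checked mechanically (Python; my own illustration): send a
quaternion q = t + xi + yj + zk to the 4x4 real matrix of "left
multiplication by q" (the eg2 matrix above), and quaternion multiplication
turns into ordinary matrix multiplication.

import numpy as np

def qmul(p, q):
    t1, x1, y1, z1 = p
    t2, x2, y2, z2 = q
    return np.array([t1*t2 - x1*x2 - y1*y2 - z1*z2,
                     t1*x2 + x1*t2 + y1*z2 - z1*y2,
                     t1*y2 - x1*z2 + y1*t2 + z1*x2,
                     t1*z2 + x1*y2 - y1*x2 + z1*t2])

def L(q):                          # the matrix of "left multiplication by q"
    t, x, y, z = q
    return np.array([[t, -x, -y, -z],
                     [x,  t, -z,  y],
                     [y,  z,  t, -x],
                     [z, -y,  x,  t]])

rng = np.random.default_rng(2)
p, q = rng.standard_normal(4), rng.standard_normal(4)

print(np.allclose(L(p) @ q, qmul(p, q)))          # True: L(p) acts as "p times"
print(np.allclose(L(qmul(p, q)), L(p) @ L(q)))    # True: products go to products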

I'll just stop here, for the moment. I need a couple
of days to do other things.


Gerard


Toby Bartels

unread,
Jan 18, 2000, 3:00:00 AM1/18/00
to
John Baez <ba...@galaxy.ucr.edu> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>I won't work out the proof until John fails to say I'm wrong.

>You won't work it out until I fail to say you're wrong???
>First it's your wonderful phrase "fall out of disfavor", and
>now this. Double negatives are okay, but triple negatives
>turn my brain to mush. I can't even tell if you meant what
>you said or not!

"fallen out of disfavour" was a mistake, but I meant what I said this time.
And it wouldn't have been correct if I'd cancelled any pair of negatives.
"I'll work out the proof unless John says I'm wrong." --
not necessarily; I might still not work it out (0).
"I won't work out the proof until John says I'm right." --
you might be more cryptic and I still work out the proof (1).
"I'll work out the proof unless John fails to say I'm right." --
both objections (0) and (1) apply.
Of course, as the transformation "until" |-> "unless" shows,
"until" is also a negative, so there are 4 negatives.
This gives 3 more ways to cancel a pair of negatives.
"I'll work out the proof if John fails to say I'm wrong." -- objection (0).
"I won't work out the proof if John says I'm wrong." --
true, but the statement I gave gives more information (2).
"I won't work out the proof if John fails to say I'm right." -- objection (1).
Also, cancelling all 4 negatives:
"I'll work out the proof if John says I'm right." -- objections (0) and (1).
Therefore, only my phrasing gives the maximal true information.

>Anyway, I won't refrain from failing to say you're wrong,
>because you're RIGHT. And I'll even sketch the proof.

O, good. You weren't cryptic.

>Suppose u is any unitary in any C*-algebra A. Its spectrum
>lies on the unit circle, so given any continuous function
>f: T -> C we get an element f(u) of A by the functional
>calculus.

OK, the reason I didn't give the proof was that I thought
the functional calculus only applied to *analytic* functions f.
-- or, rather, functions analytic in z and z* --
and I wasn't sure how to extend the proof to any function.

>This gives us a *-homomorphism from C(T) to A
>sending f to f(u). And this *-homomorphism maps your U
>to the element u. And it's the only *-homomorphism with
>this property.

This is the hole in the proof, why it's a sketch
(assuming we can assume all facts about the functional calculus).
Now, an analytic function in z and z* has a Taylor series in them,
so every morphism easily must agree with this on such analytic functions.
To get all continuous functions, I think I'd have to do some analysis,
with norm limits and maybe even an epsilon. I hate analysis.
If I were being graded on this, I'd do it, but ....
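
For the record, the dreaded analysis is only a few lines; a sketch (my own
filling-in, assuming the standard facts that *-homomorphisms of C*-algebras
are norm-contractive and that Stone-Weierstrass applies to C(T)):

\text{Let } \phi, \psi : C(T) \to A \text{ be *-homomorphisms with } \phi(U) = \psi(U) = u.
\text{They agree on every *-polynomial } p(U, U^{*}).
\text{For } f \in C(T) \text{ and any such } p,
\quad \|\phi(f) - \psi(f)\| \le \|\phi(f - p)\| + \|\psi(p - f)\| \le 2\,\|f - p\|_\infty,
\text{and Stone--Weierstrass makes } \|f - p\|_\infty \text{ as small as we like, so } \phi = \psi.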

>So now we come to the much trickier and more interesting
>problem: what's the free C*-algebra on an isometry? I'll
>give you a mystical-sounding hint: it's the "noncommutative
>unit disc".

I plan to work on this next.


-- Toby
to...@ugcs.caltech.edu


Charles Francis

unread,
Jan 18, 2000, 3:00:00 AM1/18/00
to
In article <85l2i1$c13$1...@rosencrantz.stcloudstate.edu>, thus spake John
Baez <ba...@galaxy.ucr.edu>

>In article <85fki1$19a$1...@news.dtc.hp.com>,
>(Greg Weeks) <we...@golden.dtc.hp.com> wrote:
>
>>My point here
>>is that there is no physics in a C*-algebra alone. To obtain physics, you
>>have to just *know* which elements are observable and how to observe them.
>>The algebra doesn't tell you. It is not self-interpreting.
>
>This is the same for any mathematical theory of physics, or indeed,
>any formal axiomatized theory whatsoever.

Hey come on John, only a little while ago you were giving qualitative
descriptions of the solution of the equations of sound propagation,
based solely on thinking of the way molecules behave!

>For example, even in
>classical mechanics, the mathematical formalism by itself does not
>tell you how to apply that formalism to the real world.

But the point is that this is not being done, either for C*-algebra, or
more generally for quantum mechanics. Instead the formalism stands for
nothing anyone has been able to satisfactorily explain.


>
>>So, as I've said before, I have to disagree with Haag when he says that all
>>the physics can be found in the net of local C*-algebras (ie, the algebras
>>of observables restricted to various finite regions of space-time).
>
>I think statements like "all of physics can be found in [some
>mathematical structure]" are always problematic, for the reason
>I mentioned above: mathematical formalisms are never self-interpreting.

No, but there should still be interpretation.


>
>When people make such statements, they are usually engaged in a kind
>of salesmanship, so one should read what they say as advertising
>rather than statement of fact.
>

So anyone who could ever make such a statement honestly can immediately
be written off as an overpompous physicist, or as a crank. QED.

(Greg Weeks)

unread,
Jan 18, 2000, 3:00:00 AM1/18/00
to
John Baez (ba...@galaxy.ucr.edu) wrote:
: >So, as I've said before, I have to disagree with Haag when he says that all
: >the physics can be found in the net of local C*-algebras (ie, the algebras
: >of observables restricted to various finite regions of space-time).

: When people make such statements, they are usually engaged in a kind
: of salesmanship, so one should read what they say as advertising
: rather than statement of fact.

In this case I'm not sure that Haag doesn't actually believe it. It
appears that you *can* extract collision theory from the algebra net
without any additional physics. If you believed that collision theory was
everything -- and there was a time when some people said so -- then you'd
believe Haag's assertion.


Greg


Toby Bartels

unread,
Jan 18, 2000, 3:00:00 AM1/18/00
to
John Baez <ba...@galaxy.ucr.edu> wrote:

>John Baez <ba...@galaxy.ucr.edu> wrote:

>>Of course, we get to choose what we mean by "quaternionic
>>vector space" - either a left H-module or an H-bimodule.

>We also get to choose what we mean by "operator"!

Precisely.

>I don't mean to drown Toby Bartels with questions - he's probably
>busy working out that stuff about the noncommutative unit disc -
>so let me take a crack at these myself.

Actually, I gratefully set aside the noncommutative disc
to take a look at this, and my answer has already been submitted.

>First of all, if R is a noncommutative ring and M, M' are left
>R-modules, the set hom(M,M') of left R-module morphisms is not
>a left R-module, unless I'm being dumb. Similarly, if M and M'
>are R-bimodules, the set hom(M,M') of R-bimodule morphisms is
>not an R-bimodule.

Right. This comes from requiring too much of R-bimodule morphisms.
If we use a more generous category that never violates the spirit
of noncommutativity, then we do all right.

>What we'd really like is a way to make this monoidal category
>into a "closed category" - meaning roughtly that we can define
>for any H-bimodules M, M' an H-bimodule called hom(M,M'), which
>gets along with the tensor product in a nice way.

>I don't see how to do this.

Me neither, because the definition of morphism is bad.

>>And - sort of sneaking up on the same questions from the
>>other side - do n x n quaternionic matrices form a
>>quaternionic C*-algebra in your sense?

>It seems to me like they do.

Yes. But does Mat_n(H) = End(H^n) in the category of H-bimodules?
No -- unless you redefine a morphism appropriately.
Make Mat_n(H) = End(H^n), and all the problems disappear.
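
One way to cash out "redefine a morphism appropriately" (my own gloss, not
necessarily the one Toby has in mind): take the morphisms of H^n that
respect only the *right* H-action. Left multiplication by a quaternionic
matrix is such a morphism, because (Av)h = A(vh), which is the identity the
sketch below checks (quaternions again encoded as 4-vectors):

import numpy as np

def qmul(p, q):
    t1, x1, y1, z1 = p
    t2, x2, y2, z2 = q
    return np.array([t1*t2 - x1*x2 - y1*y2 - z1*z2,
                     t1*x2 + x1*t2 + y1*z2 - z1*y2,
                     t1*y2 - x1*z2 + y1*t2 + z1*x2,
                     t1*z2 + x1*y2 - y1*x2 + z1*t2])

def matvec(A, v):       # A: (n, n, 4) quaternionic matrix, v: (n, 4) vector in H^n
    return np.array([sum(qmul(A[a, b], v[b]) for b in range(v.shape[0]))
                     for a in range(A.shape[0])])

def rscale(v, h):       # the right H-action on H^n: (v h)_a = v_a h
    return np.array([qmul(va, h) for va in v])

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3, 4))
v = rng.standard_normal((3, 4))
h = rng.standard_normal(4)

print(np.allclose(rscale(matvec(A, v), h), matvec(A, rscale(v, h))))   # True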


I'm so excited that I actually get to give John hints,
that I'm Ccing this to him to make sure he sees it
before he sees my earlier post that solves everything.


-- Toby
to...@ugcs.caltech.edu


Doug B Sweetser

unread,
Jan 18, 2000, 3:00:00 AM1/18/00
to
Hello Toby:

John Baez wrote that a required criterion was:

>>a(ra') = r(aa')

And Toby replied:

>In a way, it violates the spirit of a noncommutative ring
>to insist upon the above rule.

Not surprisingly, I agree with this sentiment. My boast was that the
definition of a C*-algebra over a non-commutative ring - which differs
from the standard definition - would also work over a commutative
one. Rather than bicker about this point, I explicitly showed why
quaternions form a Banach space, what involutions are, and how to
define the norm with appropriate properties.

In the Tao of Mathematics, when confronted with a non-commutative
widget, mathematicians start working with left- and right-handed
widgets. That is Sudbery's approach to quaternion analysis. That is
Toby's approach to C*-algebra over the quaternions. That is John's
approach to applying category theory in this case.

That is not my approach, ever.

There is a symmetric part and an antisymmetric part that taken
together make the whole. This holistic approach groups elements that
have different symmetry properties, unless isolation is demanded.
There was a small discussion in another thread about a unified field
theory written like so: (g, E + B). The electric field is a vector, and
B is a pseudo-vector. It is like saying:

q = t + x i + y j + z k

i-hat is not j-hat; they are apples that are grouped with oranges
using the "+" sign to define fruit. The set of three involutions
defined in this thread makes it possible to isolate each element of a
quaternion using an automorphism, just like E can be isolated from B
using parity operations.

Look at the matrix representation of a quaternion:

    | t  -x  -y  -z |
q = | x   t  -z   y |
    | y   z   t  -x |
    | z  -y   x   t |

Which part is symmetric? The scalar, the trace, t. Which part is
anti-symmetric? The 3-vector, q - trace(q), (x, y, z). The
3-vector is anti-Hermitian, V* = -V, always. The 3-vector over the
absolute value of the 3-vector is anti-Hermitian and unitary. What is
true for t representing a real number works identically if the
differential operator d/dt is put in its place. The rules for a
C*-algebra of quaternion operators over a quaternion field are the
same as those that apply to quaternion field itself. That is as
self-consistent a system as I can imagine.
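
Both claims are easy to spot-check in the 4x4 real-matrix picture above,
where quaternion conjugation corresponds to matrix transposition (a sketch
of my own; the particular 3-vector is arbitrary):

import numpy as np

def rep(q):                          # the 4x4 real matrix of q = t + xi + yj + zk
    t, x, y, z = q
    return np.array([[t, -x, -y, -z],
                     [x,  t, -z,  y],
                     [y,  z,  t, -x],
                     [z, -y,  x,  t]])

v = np.array([0., 1., 2., 3.])       # a pure 3-vector quaternion (zero scalar part)
V = rep(v)
print(np.allclose(V.T, -V))          # True: V* = -V, anti-Hermitian

u = v / np.linalg.norm(v)            # the 3-vector over its absolute value
U = rep(u)
print(np.allclose(U.T @ U, np.eye(4)))   # True: U* U = 1, so it is also unitary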

No need to look to Stephen Adler. He only uses complex analytic
methods. He is a bright man who can see the limitations of using
handedness to define things, so he avoided it entirely.

I have avoided the lingo of tensors, because I get easily confused.
However, a quaternion can be thought of as a mixed tensor, the
combination of a scalar and a 3-vector. Give it 4 indices. Here is a
differential tensor and a potential tensor.

(d/dt, -d/dx, -d/dy, -d/dz) (phi, -Ax, -Ay, -Az)

Now form an anti-symmetric product. All that needs to be done is to
wipe out the scalar:

(d/dt, -d/dx, -d/dy, -d/dz)(phi, -Ax, -Ay, -Az) -
(phi, -Ax, -Ay, -Az)(d/dt, -d/dx, -d/dy, -d/dz)

= (0, -dAx/dt - dphi/dx + dAy/dz - dAz/dy,
-dAy/dt - dphi/dy + dAz/dx - dAx/dz,
-dAz/dt - dphi/dz + dAx/dy - dAy/dx)

  |   0     -Ex-Bx  -Ey-By  -Ez-Bz |
= | Ex+Bx     0     -Ez-Bz   Ey+By |
  | Ey+By   Ez+Bz     0     -Ex-Bx |
  | Ez+Bz  -Ey-By   Ex+Bx     0    |

Here is an anti-symmetric tensor of rank 2 with a trace of zero. If
you cannot see how to do all the work of the field-strength tensor F,
you have the creativity of a postage stamp. Do you want to know what
it takes to make a unified field theory? Keep the stuff you used to
throw away.

(d/dt, -d/dx, -d/dy, -d/dz)(phi, -Ax, -Ay, -Az) =

= (dphi/dt - dAx/dx - dAy/dy - dAz/dz,
-dAx/dt - dphi/dx + dAy/dz - dAz/dy,
-dAy/dt - dphi/dy + dAz/dx - dAx/dz,
-dAz/dt - dphi/dz + dAx/dy - dAy/dx)

  |   g     -Ex-Bx  -Ey-By  -Ez-Bz |
= | Ex+Bx     g     -Ez-Bz   Ey+By |
  | Ey+By   Ez+Bz     g     -Ex-Bx |
  | Ez+Bz  -Ey-By   Ex+Bx     g    |

Duh.


doug swee...@world.com
http://quaternions.com

