
The problematical nature of photon spin


Matt McIrvin

Feb 25, 2000
In article <88vmm5$91r$1...@murdoch.acc.Virginia.EDU>, "Douglas A. Singleton"
<da...@erwin.phys.virginia.edu> wrote:

>OK, I'm now going to try
>doing ascii math which is hard to read and boring. It would be
>much better if you could get hold of the Ohanian article
>I incessantly keep pushing (AJP, 54, pg. 500 (1986)), or
>check out Jackson (I think he assigns some of this as a
>problem).

That's one of my favorite papers of all time, if it's the one I'm thinking
of. This is the paper in which Ohanian discusses Belinfante's remarkably
intuitive interpretation of electron spin, as a circulating momentum
density in a Dirac wave packet. Just as in the case of electromagnetic
waves, the stuff tends to twist around the *outside* of a finite wave
function, so if you use an infinite plane wave you will get in trouble:
the important region gets banished to infinity.

Over on sci.physics and sci.chem a little while ago, they were discussing
pedagogical approaches to the nature of electron spin, and some of the
participants in the thread seemed to have fallen prey to one of my least
favorite commonly believed notions: that electron spin is an
unvisualizable, highly abstract entity that is called "angular momentum"
only because its commutation relations have a coincidental resemblance to
the real thing.

Unfortunately, I think that this is taught in many chemistry and physics
classes. Ohanian's paper is a pleasant antidote. The math is a bit beyond
a freshman chem course, as was discussed over there, but the *result* is
worth mentioning. Indeed, Ohanian mentions it briefly in his own freshman
physics text, in the "modern physics" section toward the end (you know,
the part that there's never enough time in the semester to cover).

--
Matt McIrvin http://world.std.com/~mmcirvin/

Charles Francis

Mar 1, 2000
In article <896cso$d2$1...@rosencrantz.stcloudstate.edu>, thus spake Matt
McIrvin <mmci...@world.std.com>

>
>That's one of my favorite papers of all time, if it's the one I'm thinking
>of. This is the paper in which Ohanian discusses Belinfante's remarkably
>intuitive interpretation of electron spin, as a circulating momentum
>density in a Dirac wave packet. Just as in the case of electromagnetic
>waves, the stuff tends to twist around the *outside* of a finite wave
>function, so if you use an infinite plane wave you will get in trouble:
>the important region gets banished to infinity.
>

I can't grasp what you're saying here at all, Matt. It seems to depend on
an idea that a wave function is a real entity, not a creation of
mathematics. How can you justify that? And if spin is a circulating
momentum density in a wave packet, how can the spin axis depend on the
direction you yourself are facing? As you turn around, are you magically
manipulating all the electron wave packets in the universe?
--
Regards

Charles Francis
cha...@clef.demon.co.uk


Jonathan Scott

Mar 2, 2000
Charles Francis <cha...@clef.demon.co.uk>
wrote in message news:89jlaj$5ns$1...@rosencrantz.stcloudstate.edu...
...

> I can't grasp what you're saying here at all, Matt. It seems to depend on
> an idea that a wave function is a real entity, not a creation of
> mathematics. How can you justify that? And if spin is a circulating
> momentum density in a wave packet, how can the spin axis depend on the
> direction you yourself are facing? As you turn around, are you magically
> manipulating all the electron wave packets in the universe?

The spin axis in the Dirac wave equation has a physical direction in
space. We are used to the fact that a vector changes when seen from
a different frame of reference, but we still know it is the same
vector. A spinor is effectively expressed as a half-angle rotation
relative to an observer-dependent reference direction, so its value
changes in rather a strange way when viewed from different frames of
reference, but its physical meaning is unchanged (except that there
is a 2-1 mapping as one rotates, which I consider an artefact of
the notation).

As an analogy, consider a rather odd way of navigating where you
give instructions as follows: hold up a mirror at n degrees to
the forward direction (in the vertical plane) and then move
in the direction of the reflected forward direction.

This has the same two properties as spinors, in that you need to
know which way the person is facing and you need to specify half
the angle to the required direction, but if the person turns to
a different direction, you can easily adjust the instructions
to refer to the original direction.
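Jonathan's mirror rule is just angle-doubling by reflection. Here is a small 2D numerical sketch (my own construction, not from the post) checking that a mirror held at n degrees to the forward direction sends you off at 2n degrees:

```python
import numpy as np

def reflect(v, mirror_angle):
    """Reflect direction v in a mirror whose surface lies at mirror_angle
    (radians) to the x-axis, in 2D."""
    c, s = np.cos(2*mirror_angle), np.sin(2*mirror_angle)
    # Reflection about a line through the origin at angle a:
    # [[cos 2a, sin 2a], [sin 2a, -cos 2a]]
    return np.array([[c, s], [s, -c]]) @ v

forward = np.array([1.0, 0.0])   # the walker faces along +x

# Mirror at n degrees to the forward direction -> travel at 2n degrees:
for n_deg in (15, 30, 45):
    d = reflect(forward, np.radians(n_deg))
    print(n_deg, np.degrees(np.arctan2(d[1], d[0])))   # 15 -> 30, 30 -> 60, 45 -> 90
```

The half-angle bookkeeping here (specify n to travel at 2n) is the analogue of the spinor's half-angle parametrisation.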

Jonathan Scott - jonatha...@attglobal.net


Matt McIrvin

Mar 2, 2000
In article <38bd6...@news3.prserv.net>, "Jonathan Scott"
<jonatha...@ibm.net> wrote:

>A spinor is effectively expressed as a half-angle rotation
>relative to an observer-dependent reference direction, so its value
>changes in rather a strange way when viewed from different frames of
>reference, but its physical meaning is unchanged (except that there
>is a 2-1 mapping as one rotates, which I consider an artefact of
>the notation).

The *overall* phase of the wave function has no physical consequences, and
I suppose that the 2-1 mapping itself probably should be viewed as a
notational artifact-- in a sense it's really a mapping from a whole
circle of overall phases to the same state.

On the other hand, relative phases do matter. That relative minus sign
from a 2pi rotation can actually be seen in interference experiments with
cold neutrons, by sending them through the neutron equivalent of a
Michelson interferometer and putting one of the split beams through a
magnetic device that twists the spin by a full rotation. When the beams
are combined, the interference pattern shifts depending on whether the
twisting field is on or off.

This gets my vote for one of the weirdest experimental results in all of
physics, even though it was not at all unexpected.
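The minus sign under a 2 pi rotation can be verified directly from the Pauli matrices. A short numerical sketch (my own, h_bar = 1), using the standard closed form exp(-i theta n.sigma/2) = cos(theta/2) I - i sin(theta/2) n.sigma:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(theta, n):
    """Spin-1/2 rotation operator exp(-i theta (n.sigma)/2) for unit vector n."""
    nsig = n[0]*sx + n[1]*sy + n[2]*sz
    return np.cos(theta/2)*np.eye(2) - 1j*np.sin(theta/2)*nsig

print(np.round(rotation(2*np.pi, [0, 0, 1]), 12))   # -> minus the identity
print(np.round(rotation(4*np.pi, [0, 0, 1]), 12))   # -> plus the identity
```

A full 2 pi turn gives -1 on the spinor (the relative sign the neutron experiment detects), and only a 4 pi turn returns it to itself.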

Anyway, I now realize that this minus sign in the wave function was what
Charles Francis was probably asking about. To answer his question: it
doesn't affect the momentum density at all... and the phase you'd get
by rotating your coordinate system 360 degrees does not affect anything,
because it is an overall phase that applies to the wave function of the
whole world. Obviously I wouldn't regard this as a physical thing.

Charles Francis

Mar 3, 2000
In article <2000030200...@world.std.com>, thus spake Matt
McIrvin <mmci...@world.std.com>
>In article <89jlaj$5ns$1...@rosencrantz.stcloudstate.edu>, Charles Francis
><cha...@clef.demon.co.uk> wrote:
>
>[about spin as arising from a momentum density in the wave function]
>
>Probably I should have been careful and said something about expectation
>values.

That would be okay. You seemed to be criticising the idea that spin
itself, not the expectation value, is unvisualisable. Had you had a
visualisation for it, I would want to know.
>
>Anyway, spin can be described in this way, just like momentum or energy.
>One difference is that the [various things] include Pauli matrices,
>or, actually, Dirac matrices in the full relativistic treatment,
>but nevertheless it can be thought of as r x P, where P is a density
>with the dimensions of momentum density. If you take the integral of
>P instead of r x P, the "spin" part integrates to zero and you just
>get the expectation value of the total momentum.


>
>>And if spin is a circulating
>>momentum density in a wave packet, how can the spin axis depend on the
>>direction you yourself are facing? As you turn around, are you magically
>>manipulating all the electron wave packets in the universe?
>

>I'm not sure quite what you're referring to here. The spin vector
>corresponding to a given electron state doesn't depend on the direction
>you are facing; its *components* depend on the choice of basis, but that
>is no different from any other vector.
>
Again it was the suggestion that it was visualisable that I picked up
on. An ordinary vector is visualisable, and when I turn around its
components behave in a way that makes sense as the behaviour of an
object with a direction in space-time.

Charles Francis

Mar 3, 2000
In article <38bd6...@news3.prserv.net>, thus spake Jonathan Scott
<jonatha...@ibm.net>

>
>The spin axis in the Dirac wave equation has a physical direction in
>space. We are used to the fact that a vector changes when seen from
>a different frame of reference, but we still know it is the same
>vector. A spinor is effectively expressed as a half-angle rotation

>relative to an observer-dependent reference direction, so its value
>changes in rather a strange way when viewed from different frames of
>reference, but its physical meaning is unchanged (except that there
>is a 2-1 mapping as one rotates, which I consider an artefact of
>the notation).
>
I don't think you're being rigorously consistent in this paragraph. Is it
a physical direction in space, or an artefact of the notation? And if it
is an artefact of the notation, then what are we really talking about
when you strip the notation away?

>As an analogy, consider a rather odd way of navigating where you
>give instructions as follows: hold up a mirror at n degrees to
>the forward direction (in the vertical plane) and then move
>in the direction of the reflected forward direction.
>
>This has the same two properties as spinors, in that you need to
>know which way the person is facing and you need to specify half
>the angle to the required direction, but if the person turns to
>a different direction, you can easily adjust the instructions
>to refer to the original direction.
>

This is either an extremely neat way of explaining the behaviour of
spin, or a wrong analogy. The mirror itself is "interfering" with the
description of the direction, much like the measurement apparatus
"interferes" with the measurement in qm. So it sounds neat. How far can
you push the analogy, and where does it break down?

Gerard Westendorp

Mar 3, 2000
I have now read the article by Ohanian. I am starting to get the point,
although there are some really tricky points that Ohanian does not
explain very clearly. (nice article, though)

For instance, let

( 1 )
A = ( i ) exp(i(t-z))
( 0 )


As Douglas A. Singleton wrote (and so does Ohanian),


> L = r X eps c (E X B)

[..]

> First using standard tricks L above can be split as
>
> r x (E^i grad A^i) + E x A (this is an exercise
> in Jackson and Ohanian
> spells out the steps)
>
> [there should be a volume integral over this --> int d^3 x
> and I'm setting c=1]
>


If you substitute the expression for A in the expressions above,
you get


L = r X eps c (E X B) = 0 (at x=y=0)

( 0 )
E x A = ( 0 ) exp(2i(t-z))
( -2 )


r x (E^i grad A^i) = 0 (at x=y=0)


This seems to be a contradiction: two different values depending on
which formula you use. This is not cleared up in the article. However,
the contradiction can be traced to the partial integration. When you do
that, the boundary term that you take outside the integral should be
zero at the boundaries. But in the case of a plane wave, it is not.
This makes the 2nd expression valid only for a finite wave packet!
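The split is easy to mis-handle with complex fields. As a sanity check (my own, using the real form of the wave and c = 1), sympy confirms that r X (E X B) vanishes on the axis (in fact everywhere, since E X B points purely along z), while E X A is a nonzero constant:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)

# Real form of the circularly polarised wave A = (1, i, 0) exp(i(t - z))
A = sp.Matrix([sp.cos(t - z), -sp.sin(t - z), 0])
E = -A.diff(t)                 # E = -dA/dt (scalar potential zero, c = 1)

def curl(F):
    return sp.Matrix([
        F[2].diff(y) - F[1].diff(z),
        F[0].diff(z) - F[2].diff(x),
        F[1].diff(x) - F[0].diff(y),
    ])

B = curl(A)
r = sp.Matrix([x, y, z])

S = E.cross(B)                                    # E x B
Lz_total = sp.simplify(r.cross(S)[2])             # z-component of r x (E x B)
Lz_spin  = sp.simplify(E.cross(A)[2])             # z-component of E x A
grad_term = sp.simplify(
    x*sum(E[i]*A[i].diff(y) for i in range(3))
  - y*sum(E[i]*A[i].diff(x) for i in range(3)))   # z-cpt of r x (E^i grad A^i)

print(Lz_total, Lz_spin, grad_term)   # 0, -1, 0
```

With real fields the E x A term is a constant (here -1 in these units) rather than the oscillating 2 exp(2i(t-z)); the extra factor and phase in the complex-field computation are artifacts of not taking real parts, but the qualitative conclusion is the same: all the angular momentum sits in the E x A term, which only equals the r x (E x B) integral after a partial integration valid for a finite packet.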


One way of seeing it is by saying a circularly polarised photon
has angular momentum h. This a.m. is concentrated near the
"boundaries" of the photon wave function. If you put an extra
photon adjacent to it, you get the same effect as in the derivation of
Stokes' theorem: the circular fields cancel at the interfaces, and a
net circular field remains only at the edges of the total field.
You can continue this process until you get a large bunch of photons.
You get a net circular energy flow only at the edges of the wave
packet. So the situation for a plane wave is very tricky.

As for the angular momentum density of a dipole radiator, I think Doug
was right that I was wrong. I calculated

L = r X (E X B)

by first doing (E X B), whose tangential component goes to zero at large
distances. But because of the extra r in the expression, L does not
go to zero for large r.

The expression

S = E x A

is not gauge invariant. We could cure this by saying:

S = ( E X dE/dt )/w^2

Gerard


Charles Francis

Mar 3, 2000
In article <2000030205...@world.std.com>, thus spake Matt
McIrvin <mmci...@world.std.com>

>Anyway, I now realize that this minus sign in the wave function was what
>Charles Francis was probably asking about. To answer his question: it
>doesn't affect the momentum density at all...

This isn't my question. It is easy enough to manipulate the formulae and
get the right physical prediction, much harder to say what is actually
going on.

>and the phase you'd get
>by rotating your coordinate system 360 degrees does not affect anything,
>because it is an overall phase that applies to the wave function of the
>whole world.

Not so. Co-ordinate systems are found in matter in the environment of a
particle. When we rotate a co-ordinate system in the mathematics we are
describing the analogue of physically rotating the apparatus used to
determine the co-ordinate system. So we are not rotating the wave
function of the whole world, only that part of it measured by that
apparatus.

Incidentally, the wave function of the whole world is, in my view, one of
the ill-conceived concepts knocking around on the fringes of proper
science. A co-ordinate system is relative: it gives a numerical
relationship between one bit of matter and another, where one of these
bits of matter includes a clock and a framework for the measurement of
position. The wave function gives a probability relationship for what
that value might be in the event of an actual measurement, so it is
essentially a relationship between one part of the world and another. So
the phrase "the wave function of the whole world" is in the category of
"colorless green ideas sleep furiously" (Chomsky).

>Obviously I wouldn't regard this as a physical thing.
>

Then you have come full circle, since your original criticism was of
"one of my (i.e. your) least favorite commonly believed notions: that
electron spin is an unvisualizable, highly abstract entity that is
called "angular momentum" only because its commutation relations have a
coincidental resemblance to the real thing".

On the other hand spin can be regarded as a real physical thing. Indeed
I think it must be, since spin does produce real physical results. But
it should not be regarded, like the angular momentum of a spinning ball,
as something which has its own existence with the ball, but as part of a
physical relationship between apparatus and particle. Like Jonathan
Scott's example with the mirror, when one of two things in a
relationship is rotated, there can be a gearing in the relationship, in
the ratio n:2 for integer n in the rotation of the spinor, and this is
exactly what we observe in nature.

This does not fully answer the question, however. We already know the
mathematics of spin, and the notion of gearing perhaps gives us some
insight into how half integral relationships can occur in the physical
world, but it still does not give us the actual mechanism for spin
itself.

Jonathan Scott

Mar 4, 2000

Charles Francis <cha...@clef.demon.co.uk>
wrote in message news:ym+1vrAX...@clef.demon.co.uk...

> In article <38bd6...@news3.prserv.net>, thus spake Jonathan Scott
> <jonatha...@ibm.net>
> > ... A spinor is effectively expressed as a half-angle rotation

> >relative to an observer-dependent reference direction, so its value
> >changes in rather a strange way when viewed from different frames of
> >reference, but its physical meaning is unchanged (except that there
> >is a 2-1 mapping as one rotates, which I consider an artefact of
> >the notation).
> >
> I don't think you're being rigorously consistent in this paragraph. Is it
> a physical direction in space, or an artefact of the notation? And if it
> is an artefact of the notation, then what are we really talking about
> when you strip the notation away?

The 2-1 mapping of the absolute phase is the only part which I
consider to be an artefact of the notation.

> >As an analogy, consider a rather odd way of navigating where you
> >give instructions as follows: hold up a mirror at n degrees to
> >the forward direction (in the vertical plane) and then move
> >in the direction of the reflected forward direction.
> >
> >This has the same two properties as spinors, in that you need to
> >know which way the person is facing and you need to specify half
> >the angle to the required direction, but if the person turns to
> >a different direction, you can easily adjust the instructions
> >to refer to the original direction.
> >
> This is either an extremely neat way of explaining the behaviour of
> spin, or a wrong analogy. The mirror itself is "interfering" with the
> description of the direction, much like the measurement apparatus
> "interferes" with the measurement in qm. So it sounds neat. How far can
> you push the analogy, and where does it break down?

You already pushed it too far. The mirror is only a device which
doubles angles in this case.

I'll see if I can think of a better way of explaining it in the next
day or two.

Jonathan Scott - jonatha...@attglobal.net

Gerard Westendorp

Mar 5, 2000
In his article, Ohanian shows that if you calculate the orbital
angular momentum of an electron wave function, you get the correct
value of h_bar/2.

A subtlety in this calculation, is that you only need to know
the momentum density.
Because it involves a *square* of the wave function

phi^dagger (matrix expressions) phi

the momentum density does not behave strangely under rotations, it
is an ordinary vector. Also, it is not a wave, because the phase
is canceled out.
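Gerard's point can be checked directly: the bilinear is insensitive to the overall phase, and it rotates as an ordinary vector even though the spinor rotates by half-angles. A numerical sketch (my own; the Pauli matrices stand in for the "(matrix expressions)" above):

```python
import numpy as np

rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [sx, sy, sz]

def spin_vector(psi):
    """The bilinear psi^dagger sigma psi: real, and phase-independent."""
    return np.array([np.real(psi.conj() @ s @ psi) for s in sigma])

psi = rng.normal(size=2) + 1j*rng.normal(size=2)

# 1. An overall phase cancels in the bilinear:
assert np.allclose(spin_vector(psi), spin_vector(np.exp(1j*0.7)*psi))

# 2. Rotating the spinor by theta about z rotates the bilinear by theta
#    (the half-angle lives only in the spinor):
theta = 0.9
U = np.cos(theta/2)*np.eye(2) - 1j*np.sin(theta/2)*sz   # half-angle on the spinor
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])        # full angle on the vector
assert np.allclose(spin_vector(U @ psi), R @ spin_vector(psi))
print("bilinear transforms as an ordinary vector")
```

So quantities quadratic in the wave function, such as the momentum density, behave like ordinary tensors; the "weirdness" really is confined to the wave function itself.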

The weirdness only comes into play when you look at the wave function.
In early atom theory, it was argued that a wave circulating around an
atomic nucleus has wavelength 2 pi r. Therefore, it has momentum
h / (2 pi r). The angular momentum is h / (2 pi), or h_bar.

To get angular momentum h_bar/2, you would need a wavelength
*twice* the circumference of the orbit.
It would seem that there is no way that can exist in 3D space!
Ohanian does not go into this problem.

However, the calculation by Ohanian uses just ordinary 3D
space. The electron wave function consists of 4 complex valued
functions of 3D space (the Dirac bi-spinors). This completely
specifies the system. Weird...

On a Mobius strip, you can draw a wave function that has the
required property that the wavelength is twice the circumference.
I have been thinking my brains off on how this relates to the problem.
I'll get back if I figure it out.


Gerard


Gerard Westendorp

Mar 8, 2000
Going back to the original paradox, armed with the formula
for angular momentum density of a plane wave:

S = (E X dE/dt)/w^2

We can now check the angular momentum density in the x-direction,
which should be zero in the receiver and the emitter frame.


(I am notoriously bad at this, so if you are serious
about the subject, don't believe it, especially minus signs!)

emitter frame:

E B dE/dt
-------------------------------------------------------------
x +cos(wt-kz) -sin(wt-kz) -w sin(wt-kz)
y +sin(wt-kz) +cos(wt-kz) +w cos(wt-kz)
z 0 0 0


Poynting A.m.
-------------------------------------------------------------
x 0 0
y 0 0
z 1 1/w
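The emitter-frame columns are easy to check numerically. A quick sketch (my own, c = 1, arbitrary w) averaging the proposed spin density S = (E X dE/dt)/w^2 over one period:

```python
import numpy as np

w = 1.3                                   # arbitrary angular frequency (k = w, c = 1)
t = np.linspace(0, 2*np.pi/w, 400, endpoint=False)
z = 0.0
ph = w*t - w*z                            # phase wt - kz

# Circularly polarised fields, as in the emitter-frame table above
E = np.stack([np.cos(ph), np.sin(ph), np.zeros_like(ph)])
dEdt = np.stack([-w*np.sin(ph), w*np.cos(ph), np.zeros_like(ph)])

S = np.cross(E, dEdt, axis=0) / w**2      # proposed density (E x dE/dt)/w^2

print(S.mean(axis=1))                     # -> (0, 0, 1/w), matching the table
```

The receiver-frame columns involve the transformed fields that Gerard already flags as unreliable, so only the emitter frame is checked here.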


receiver:
The E and B fields have been transformed using formulae from
the Feynman Lectures on Physics.
The argument (wt-kz) becomes:
A = gamma^2(wt+v^2wx) - kz

E B dE/dt
-------------------------------------------------------------
x +cos(A) -sin(A) - gamma^2 w sin(A)
y +sin(A)*gamma +cos(A)*gamma + gamma^3 w cos(A)
z +cos(A)*gamma*v -sin(A)*gamma*v + gamma^3 w sin(A)*v


Poynting A.m.
-------------------------------------------------------------
x gamma^2 v -1/w gamma^4 v cos(2A)
y 0 -1/w gamma^3 v sin(2A)
z gamma 1/w gamma^3


So there are time-dependent x- and y-components of a.m., but the time
average is zero.

So there seems to be no paradox. The whole thing remains pretty hard
to visualise.

Gerard


Gerard Westendorp

Mar 15, 2000
Gerard Westendorp wrote:

> In his article, Ohanian shows that if you calculate the orbital
> angular momentum of an electron wave function, you get the correct
> value of h_bar/2.

[...]

> To get angular momentum h_bar/2, you would need a wavelength
> *twice* the circumference of the orbit.
> It would seem that there is no way that can exist in 3D space!
> Ohanian does not go into this problem.

One way to clear this up might involve wave functions and
operators.

On a solution of the Schrodinger equation phi(x,t), you can
calculate the angular momentum by using the operator:

L^ = r X (ih_bar) d/dr

(You sandwich this operator between a bra and a ket wave function)

The operator above has the problem that in spherical coordinates
(r,theta,psi), the z-component of L^ works out as

L^_z = (ih_bar) d/d psi

The eigenfunctions of this operator have the form

phi = f(r,theta) * exp(i n psi)

with integer n (for single-valuedness), so the eigenvalues are integer
multiples of h_bar. This means there is no room for h_bar/2
eigenfunctions!
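The single-valuedness argument can be spelled out with sympy (my own sketch, h_bar = 1, using the conventional sign L_z = -i d/dpsi; sign conventions vary in the thread but the integer constraint is the same either way):

```python
import sympy as sp

psi, m = sp.symbols('psi m')
f = sp.exp(sp.I*m*psi)

# Eigenvalue of L_z = -i d/dpsi (h_bar = 1):
eigenvalue = sp.simplify(-sp.I*f.diff(psi)/f)
print(eigenvalue)                        # m

# Single-valuedness on the circle: f(psi + 2 pi) = f(psi) forces
# exp(2 pi i m) = 1, i.e. integer m:
ratio = sp.simplify(f.subs(psi, psi + 2*sp.pi)/f)
print(ratio.subs(m, sp.Rational(1, 2)))  # -1: a half-integer "eigenfunction" is double-valued
```

A putative m = 1/2 eigenfunction changes sign after one trip around the axis, which is exactly the obstruction Gerard is pointing at, and why intrinsic spin cannot be written as an ordinary orbital wave in psi.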

But:

1) Is the momentum operator still (ih_bar) d/dr for Dirac
waves?

2) What wave function do you operate it on?

and

What wave function do you use for photons when you want
to calculate an expectation value of an operator?
Is it F_mu_nu, or A_mu, or what?

I have been trying to understand spin 1/2 for a long time.

Gerard


Gerard Westendorp

Mar 17, 2000
squ...@my-deja.com wrote:

> Gerard Westendorp <wes...@xs4all.nl> wrote:

> > The eigenfunctions of this operator have the form
> >
> > phi = f(r,theta) * exp(i n psi)
> >
> > This means there is no room for h_bar/2 eigenfunctions!

> Yes, but this operator is for ORBITAL angular momentum, i.e. the angular
> momentum the particle's motion gives him with respect to the origin. A
> fermion (like an electron or a quark) still has integer orbital angular
> momentum, but its spin - which is the INTRINSIC angular momentum - is
> half-integer.

Yes, but I was discussing an article that claims that you can actually
view intrinsic spin as due to a circular flow of momentum density.
The thread is a bit old, and spread out over some weeks, so you
probably missed this context.

> > 1) Is the momentum operator still (ih_bar) d/dr for Dirac
> > waves?

> The orbital angular momentum - yes. To get the spin too you need to add the
> spin operator.



> > 2) What wave function do you operate it on?

I asked this because the energy momentum tensor of the Dirac field looks
like a complicated expression. But if I just do:

momentum expectation value = < phi | (ih_bar) d/dr | phi >

This is an integral over a volume, so you would expect

momentum density = phi^* (ih_bar) d/dr phi

(phi is a vector with 4 components)

But the expression used in the tensor is much more complicated.
I was hoping that the difference is also the difference that
explains spin 1/2.

> > I have been trying to understand spin 1/2 for a long time.

> I hope you understand it a bit better, now! :-)

Making some progress. Might as well have a serious try this time.

Gerard


Gerard Westendorp

Mar 23, 2000
squ...@my-deja.com wrote:
>
[..]

>
> You are confusing fields with wavefunctions. A Dirac particle wave
> function is a 2-component object whose momentum expectation value may
> be calculated as you describe (considering the sign issue). This object
> cannot be ascribed a momentum density, as it describes a point
> particle. A Dirac field is a 4-component object which can be ascribed a
> momentum density, but it should be calculated differently. The field
> (if quantized) describes a whole flavour of particles.

The wave function describes a point particle, but the particle has
a wave-like nature in that it has an amplitude for being at a certain
point. In fact, it has a probability density for being at each point.
So I think it would be consistent in that regard to interpret

Phi^* d/dr Phi (leaving out factors)

as a momentum density. (For the Schrodinger eq., but maybe not Dirac)

As for my confusing fields with wave functions, there
is a Phi in the Dirac equation and a Phi in the Pauli equation that should
be treated differently.
What I mean by "wave function" is the Phi in the Dirac equation. I am
re-reading this stuff right now. There is a conserved current whose
t-component is Phi^* Phi, suggesting a probability interpretation. But
then there is the stuff about negative energy.

One thing I don't get yet is what function to operate on to produce
observable expectation values, consistent with the Dirac equation.

Gerard


Gerard Westendorp

Mar 23, 2000
"(Greg Weeks)" wrote:

[...]

>
> What about quantum field theory? It is usually suggested that the angular
> momentum density has both an orbital and a spin part (eg, equation 3-155 of
> Itzykson and Zuber; see also Ryder, Bjorken and Drell, etc):
>
> M(m,a,b) = x(a)T(m,b) - x(b)T(m,a) + S(m,a,b)
>
> THIS IS WRONG. In fact, just as in classical field theory, angular
> momentum density is obtained solely from the energy-momentum density:
>
> M(m,a,b) = x(a)T(m,b) - x(b)T(m,a)
>
> Isn't that COOL?

Yes!
But I wouldn't say this is quantum field theory. It seems that it is
the Dirac equation which leads to the result. This is relativistic
quantum mechanics, but I am making this point because if there is
a continuous field as a function of space/time, there should be a
way of visualising how the field gives rise to spin 1/2.


Gerard

I'll be back!
(A. Schwarzenegger)


squ...@my-deja.com

Mar 24, 2000
In article <38D89031...@xs4all.nl>,
Gerard Westendorp <wes...@xs4all.nl> wrote:

> squ...@my-deja.com wrote:
> > You are confusing fields with wavefunctions. A Dirac particle wave
> > function is a 2-component object whose momentum expectation value
> > may be calculated as you describe (considering the sign issue).
> > This object cannot be ascribed a momentum density, as it describes
> > a point particle. A Dirac field is a 4-component object which can
> > be ascribed a momentum density, but it should be calculated
> > differently. The field (if quantized) describes a whole flavour of
> > particles.
>
> The wave function describes a point particle, but the particle has
> a wave-like nature in that it has an amplitude for being at a certain
> point. In fact, it has a probability density for being at each point.
> So I think it would be consistent in that regard to interpret
>
> Phi^* d/dr Phi (leaving out factors)
>
> as a momentum density.

No, it wouldn't be. The wavelike nature doesn't suggest a momentum
distribution. The momentum is still concentrated in a single point,
this point being undefined. Notice the difference (that is, different
singular distributions have different probabilities, but non-singular
distributions are not allowed).
Actually, the situation is a bit more complicated, as momentum
distributions have no meaning for a single particle. The problem is the
Heisenberg uncertainty, which doesn't allow constructing momentum
density eigenvectors with a single particle. In QFT, on the other hand,
such eigenvectors may be constructed, but they are not eigenvectors of
the particle number.

> As for my confusing fields with wave functions, there is a Phi in the
> Dirac equation and a Phi in the Pauli equation that should be treated
> differently. What I mean by "wave function" is the Phi in the Dirac
> equation.

It is usually a Psi :-) and it is not a wavefunction. The Dirac field
describes a whole flavour rather than single particle (when quantized!).

> I am re-reading this stuff right now. There is a conserved current
> whose t-component is Phi^* Phi, suggesting a probability
> interpretation. But then there is the stuff about negative energy.

A more modern approach would be that Psi*Psi is the charge density. The
problems with the negative energy are those which lead to the creation
of this approach - quantum field theory.

> One thing I don't get yet is what function to operate on to produce
> observable expectation values, consistent with the Dirac equation.

The 2-component function I described! Notice, that this wavefunction
doesn't correspond to an eigenvector of the field, so you should be
careful not to confuse it with the Dirac field itself.

Regards, squark.


(Greg Weeks)

Mar 24, 2000
Gerard Westendorp (wes...@xs4all.nl) wrote:
: But I wouldn't say this is quantum field theory. It seems that it is

: the Dirac equation which leads to the result. This is relativistic
: quantum mechanics, but I am making this point because if there is
: a continuous field as a function of space/time, there should be a
: way of visualising how the field gives rise to spin 1/2.

I should perhaps thump the fact that Dirac's equation has two quite
different interpretations.

Dirac originally viewed Psi as a wave-function of a single electron.

Eventually Psi came to be viewed as a quantized field, just as the
electromagnetic field is a quantized field. This is the form Psi takes
in QED.

In my postings, I only discuss the latter view of Psi. Dirac's original
view is not 100% correct, and I've never been able to know when to trust
it. So I pretty much just ignore it.


Greg


Gerard Westendorp

Mar 24, 2000
squ...@my-deja.com wrote:

[..]

> The wavelike nature doesn't suggest a momentum
> distribution. The momentum is still concentrated in a single point,
> this point being undefined. Notice the difference (that is, different
> singular distributions have different probabilities, but non-singular
> distributions are not allowed).

Why are they not allowed?

[..]


> > What I mean by "wave function" is the Phi in the Dirac
> > equation.
>
> It is usually a Psi :-) and it is not a wavefunction. The Dirac field
> describes a whole flavour rather than single particle (when quantized!).

Do you mean they could be electrons, quarks, etc?
Or I guess you are referring to 2nd quantisation. In my reply to Greg Weeks
I explain why I think a function Psi(t,x,y,z) is not a representation of
a second quantised theory.


[..]


>
> > One thing I don't get yet is what function to operate on to produce
> > observable expectation values, consistent with the Dirac equation.
>
> The 2-component function I described!

You didn't describe it, you just said it was a 2-component function...


Gerard


John Baez

Mar 24, 2000
In article <8bdum4$eq5$1...@news.dtc.hp.com>,
(Greg Weeks) <we...@golden.dtc.hp.com> wrote:

> Dirac originally viewed Psi as a wave-function of a single electron.
>
> Eventually Psi came to be viewed as a quantized field, just as the
> electromagnetic field is a quantized field. This is the form Psi takes
> in QED.
>
>In my postings, I only discuss the latter view of Psi. Dirac's original
>view is not 100% correct, and I've never been able to know when to trust
>it. So I pretty much just ignore it.

I'm not sure what you mean by "not 100% correct", but when we build
the Fock space for electrons, on which the quantized field Psi
acts as an operator, we do so taking the space of electron
wavefunctions - i.e., solutions of the Dirac equation - as our
single-particle Hilbert space. So both views fit together, hand
in glove! The reason the free quantized field Psi satisfies the
Dirac equation is precisely because the electron wavefunctions do.

In this respect, the Dirac equation is just like the Klein-Gordon
equation or Maxwell's equations. In any free field theory, the
Hilbert space on which the quantum field acts as operators is
obtained by applying the Fock construction to the Hilbert space
of "single-particle wavefunctions", which are solutions of the
differential equation in question. (Of course you have to use
the fermionic Fock space in the Dirac case, and the bosonic Fock
space in the Klein-Gordon and Maxwell cases.)
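[Editor's note: the Fock construction Baez describes can be illustrated in a toy way. The sketch below is entirely my own, not from the thread; it truncates the bosonic Fock space of a single mode at an arbitrary cutoff N and represents the ladder operators as matrices. The number operator exhibits the N-grading, and the canonical commutator [a, a†] = 1 holds everywhere except at the truncation edge, an artifact of cutting off the ladder.]

```python
import numpy as np

# Truncated bosonic Fock space over a single mode (a sketch; the true
# space is infinite-dimensional, so the truncation is an approximation).
N = 12  # keep the states |0>, ..., |N-1>

# Annihilation operator: a|n> = sqrt(n) |n-1>
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
adag = a.conj().T

# The number operator a†a gives the grading of Fock space
number = adag @ a
print(np.allclose(np.diag(number), np.arange(N)))  # True

# Canonical commutator [a, a†] = 1 holds except in the last
# diagonal entry (the truncation artifact)
comm = a @ adag - adag @ a
print(np.allclose(np.diag(comm)[:-1], 1.0))  # True
```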


Gerard Westendorp

unread,
Mar 25, 2000, 3:00:00 AM3/25/00
to
"(Greg Weeks)" wrote:
>
> Gerard Westendorp (wes...@xs4all.nl) wrote:
> : But I wouldn't say this is quantum field theory. It seems that it is
> : the Dirac equation which leads to the result. This is relativistic
> : quantum mechanics, but I am making this point because if there is
> : a continuous field as a function of space/time, there should be a
> : way of visualising how the field gives rise to spin 1/2.
>
> I should perhaps thump the fact that Dirac's equation has two quite
> different interpretations.
>
> Dirac originally viewed Psi as a wave-function of a single electron.
>
> Eventually Psi came to be viewed as a quantized field, just as the
> electromagnetic field is a quantized field. This is the form Psi takes
> in QED.
>
> In my postings, I only discuss the latter view of Psi. Dirac's original
> view is not 100% correct, and I've never been able to know when to trust
> it. So I pretty much just ignore it.
>

As I understand it:

A 1 particle state, in position representation, would look like:

Psi(t,x,y,z)

A 2 particle state:

Psi(t1,x1,y1,z1,t2,x2,y2,z2)

(I am not sure about the 2 different t's)

QFT describes an n-particle system, with arbitrarily large n. But even
in QFT (non interacting), states with 1 particle are allowed, so we wouldn't
be violating QFT to talk about our wave function.

So I think the function Psi(t,x,y,z) can be used in compliance with QFT,
but because it is only a 1-particle function, we might as well leave
QFT out of this.


Gerard


Gerard Westendorp

unread,
Mar 27, 2000, 3:00:00 AM3/27/00
to
John Baez wrote:

[..]


>
> , but when we build
> the Fock space for electrons, on which the quantized field Psi
> acts as an operator, we do so taking the space of electron
> wavefunctions - i.e., solutions of the Dirac equation - as our
> single-particle Hilbert space. So both views fit together, hand
> in glove! The reason the free quantized field Psi satisfies the
> Dirac equation is precisely because the electron wavefunctions do.

[..]

I am getting confused as to definitions. Below are a few possible ones.
I don't know who is using which, perhaps there are standard ones.

Field
1) any function of space/time
2) a second quantised "thingy"

Wavefunction
1) A state written in coordinate representation
2) A solution to the differential (wave) equation

The thing I am looking for is a wave-function-like description
of a single-particle state of an electron. This function must
have the property that you can operate observables on it, and
obtain expectation values. Using this "wave function", I want
to try and understand the structure of spin 1/2. (As discussed,
the spin 1/2 might be contained in the wave function, rather than
being postulated separately)

And, I need the correct observable for momentum, if it is different
for spin (1/2) than in the Schrodinger equation.

Gerard

btw, in my QFT book (Hatfield) I have seen the Dirac equation
as an "operator equation of motion". The Psi is printed in bold
letters, meaning "operator" instead of "function".


Matt McIrvin

unread,
Mar 27, 2000, 3:00:00 AM3/27/00
to
In article <8bgk97$frq$1...@pravda.ucr.edu>, ba...@galaxy.ucr.edu (John Baez) wrote:

>In article <8bdum4$eq5$1...@news.dtc.hp.com>,
>(Greg Weeks) <we...@golden.dtc.hp.com> wrote:
>

>>In my postings, I only discuss the latter view of Psi. Dirac's original
>>view is not 100% correct, and I've never been able to know when to trust
>>it. So I pretty much just ignore it.
>

>I'm not sure what you mean by "not 100% correct" [...]

I think he's talking about the *original* original Dirac theory, in
which there were negative-energy solutions and headache-inducing
things like Klein's paradox, which you hacked around by introducing
the filled sea of negative-energy electrons, at which point the positron
appeared as a positive-energy "hole." This machinery has fallen out
of favor.

However, it's worth mentioning that the notion of a momentum density that
yields spin angular momentum actually does not depend on any of that
stuff! You can get it out of *nonrelativistic* Schrodinger theory if you
use a two-component wave function, treat grad^2 as
(grad.sigma)(grad.sigma) (one can motivate this by the fact that it gives
the right leading term in the magnetic moment under minimal EM coupling,
in the tradition of Feynman and Sakurai), then treat it as a field theory
and take the symmetrization of the canonical stress-energy tensor. You
get a weird-looking term in the momentum density that yields the spin when
you integrate r x P, for any properly normalized wave function.
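[Editor's note: this claim is easy to verify numerically. The sketch below is my own construction, not Ohanian's or McIrvin's; units hbar = 1 and the Gaussian width are arbitrary choices. For a normalized spin-up Gaussian packet, the spin part of the symmetrized momentum density is P = (1/2) curl S with S = psi† (sigma/2) psi, and integrating r x P recovers the expected s_z = 1/2.]

```python
import numpy as np

# Spin-up Gaussian packet on a 3D grid, units hbar = 1
n, L = 64, 12.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")

s = 1.0
psi = (np.pi * s**2) ** (-0.75) * np.exp(-(X**2 + Y**2 + Z**2) / (2 * s**2))
rho = psi**2                      # |psi|^2; sum(rho) * dx^3 ~ 1

# Spin density for a spin-up spinor: S = psi† (sigma/2) psi = (0, 0, rho/2)
Sz = rho / 2

# P = (1/2) curl S:  P_x = (1/2) dSz/dy,  P_y = -(1/2) dSz/dx,  P_z = 0
Px = 0.5 * np.gradient(Sz, dx, axis=1)
Py = -0.5 * np.gradient(Sz, dx, axis=0)

# Total angular momentum L_z = integral of (x P_y - y P_x)
Lz = np.sum(X * Py - Y * Px) * dx**3
print(Lz)  # ~ 0.5, i.e. hbar/2
```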

Relativity sort of slips in over the transom here in the symmetrization
process-- and you have to assume that the energy is dominated by mc^2!
The derivation is perhaps best thought of as the appropriate limit of the
Dirac case. But it does emphasize that most of the peculiar things about
the Dirac equation are not important here.

I don't remember whether Ohanian mentions this version of the derivation
in his paper.

Of course all the interpretational quibbles that squark and others have
been arguing about appear here too. Is it a momentum density or an
expectation-value density? Is the wave function to be treated as the
state of a single particle, or as a classical field, or as a quantum
field?

Greg Weeks

unread,
Mar 27, 2000, 3:00:00 AM3/27/00
to
John Baez (ba...@galaxy.ucr.edu) wrote:
: In this respect, the Dirac equation is just like the Klein-Gordon
: equation or Maxwell's equations. In any free field theory, the
: Hilbert space on which the quantum field acts as operators is
: obtained by applying the Fock construction to the Hilbert space
: of "single-particle wavefunctions", which are solutions of the
: differential equation in question.

And yet Maxwell, who had a thorough understanding of these solutions,
didn't know about photons. Similarly, it isn't clear that an examination
of the c-number Dirac equation will clarify the nature of electron spin
(which was the topic at hand).

Confusion arises because the c-number Dirac equation was originally
formulated as a quantum theory, unlike the Maxwell equations. (Hence the
term second quantization, of course.) Like you, I tend to view the two
c-number equations as being on similar footings. I wouldn't accept the
classical Maxwell field as the position-space wave-function of a photon.
[It doesn't conserve probability. Besides, I know that there *is* no
position-space wave-function for photons.] Similarly, I don't trust the
interpretation of the Dirac field as the wave-function of an electron. [It
has negative-energy solutions. Besides, I know that there *is* no entirely
satisfactory position-space wave-function for electrons.]

That aside, here's a bit of propaganda for the general readership: For the
majority of applications of quantum field theory, there is no need to learn
about the "second quantization" of free field theories, and few textbooks
rely on it. The Fock space is the *outcome* of either canonical
quantization or path-integral quantization. You get it for free.


Greg


squ...@my-deja.com

unread,
Mar 29, 2000, 3:00:00 AM3/29/00
to
In article <38DB2451...@xs4all.nl>,

Gerard Westendorp <wes...@xs4all.nl> wrote:
> squ...@my-deja.com wrote:
> > The wavelike nature doesn't suggest a momentum
> > distribution. The momentum is still concentrated in a single point,
> > this point being undefined. Notice the difference (that is, different
> > singular distributions have different probabilities, but non-singular
> > distributions are not allowed).
>
> Why are they not allowed?

Because the particle creation operator at point x commutes with the momentum
density at any point other than x. Therefore, adding a particle at point x to
the state cannot change the momentum density elsewhere, and all the
particle's momentum is concentrated at x.

> > > What I mean by "wave function" is the Phi in the Dirac
> > > equation.
> >
> > It is usually a Psi :-) and it is not a wavefunction. The Dirac field
> > describes a whole flavour rather than single particle (when quantized!).
>
> Do you mean they could be electrons, quarks, etc?
> Or I guess you are referring to 2nd quantisation.

Of course! Relativistic quantum mechanics doesn't make much sense without
quantum fields.

> In my reply to Greg Weeks I explain why I think a function Psi(t,x,y,z) is not a representation of a second quantised theory.

Psi is an operator distribution. It represents the quantized Dirac field, and
since under quantization all observables become operators, the Dirac field
becomes an operator too. That is called a "quantum field".

> > > One thing I don't get yet is what function to operate on to produce
> > > observable expectation values, consistent with the Dirac equation.
> >
> > The 2-component function I described!
>
> You didn't describe it, you just said it was a 2-component function...

Fair enough :-). Well, this is a function which assigns to each point in
space a "Weyl spinor" - i.e. a two-component object, which changes in the
fundamental representation of SU(2) under rotations - i.e. the SU(2) matrices
simply multiply it - and whose Lorentz boost transformations are generated
by i times the Pauli matrices. The result is a representation of SU(2)xSU(2)
- the universal covering group of SO(3,1) (the Lorentz transformations). In
other words, when ignoring Lorentz boosts, it is just the usual wavefunction
of a spin-1/2 particle - just like in non-relativistic quantum mechanics.

Regards, squark.

Toby Bartels

unread,
Mar 29, 2000, 3:00:00 AM3/29/00
to
Greg Weeks <we...@golden.dtc.hp.com> wrote:

>That aside, here's a bit of propaganda for the general readership: For the
>majority of applications of quantum field theory, there is no need to learn
>about the "second quantization" of free field theories, and few textbooks
>rely on it. The Fock space is the *outcome* of either canonical
>quantization or path-integral quantization. You get it for free.

Indeed, *every* quantisation scheme I know of results in a Fock space.
L^2(R) is the symmetric Fock space of C, for example.
It's just that most people don't think of some of them as Fock spaces.


-- Toby
to...@ugcs.caltech.edu


Charles Francis

unread,
Mar 29, 2000, 3:00:00 AM3/29/00
to

In article <8bosmg$b...@gap.cco.caltech.edu>, thus spake Toby Bartels
<to...@ugcs.caltech.edu>
It appears to me more useful, more intuitive certainly, to work the
other way round. Establish Fock space empirically as a categorisation
scheme for measurement results, then demonstrate the commutation
relations. Then it is the quantisation which comes for free; that is the
bit I gagged on as an undergraduate because no mathematical reason for
it was presented. I gather, from a friend who is a lecturer, that the
same is true of undergraduates now. Undergraduate mathematicians of any
ability leave the physics courses in droves because those courses are not
precise enough in their expression. Not a healthy situation for theoretical
physics.

-

Aaron Bergman

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <8bnilo$o93$1...@nnrp1.deja.com>, squ...@my-deja.com wrote:
>
>Fair enough :-). Well, this is a function which assigns to each point in
>space a "Weyl spinor" - i.e. a two-component object, which changes in the
>fundamental representation of SU(2) under rotations - i.e. the SU(2) matrices
>simply multiply it - and whose Lorentz boost transformations are generated
>by i times the Pauli matrices. The result is a representation of SU(2)xSU(2)
>- the universal covering group of SO(3,1) (the Lorentz transformations).

This isn't true. It's a fairly common confusion, however, that
has to do with the difference between real and complex forms of
Lie algebras.

SU(2)xSU(2) = Spin(4) which is the universal cover of SO(4).

If we look at the Lie algebra for SO(1,3), ie so(1,3) and
complexify, ie tensor with C, we get

so(1,3) (x) C = so(4,C)

So, on the algebra level, things should look pretty much the
same. However, if we try to exponentiate, we run into problems.

The big problem is, as always, that the Lie algebra is only
sensitive to the local structure of the group. So, in addition to
getting the universal cover, there is an ambiguity from the fact
that SO(1,3) has 4 disconnected components. This means that there
are, in fact, 8 possible double covers of the group.

Anyways, if the indices on a 3+1 D Weyl spinor aren't SU(2)
indices, what are they? They're SL(2,C) indices as we can see
from the following homomorphism:

v = (a,b,c,d)

M = ( a - b     c + id )
    ( c - id    a + b  ),    with M = M^+

Also, let U be an SL(2,C) matrix. Now, we note that det M =
|v|^2. So, let M -> UMU^+. We see that this preserves the fact
that M is Hermitian and it preserves the determinant, so it acts
as a Lorentz transformation on v. So, the undotted indices are in
the fundamental of SL(2,C) and the dotted indices are in the
conjugate transpose representation of SL(2,C). The vector rep is
then formed by taking the tensor product of these two reps and
imposing the Hermiticity condition.
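[Editor's note: Aaron's homomorphism is easy to check numerically. The sketch below is my own (the random inputs and variable names are arbitrary); it verifies that det M is the Minkowski norm of v and that conjugation by an SL(2,C) matrix preserves both Hermiticity and that norm, i.e. acts as a Lorentz transformation.]

```python
import numpy as np

rng = np.random.default_rng(0)

def to_matrix(v):
    # Aaron's map: v = (a, b, c, d) -> Hermitian 2x2 matrix with
    # det M = a^2 - b^2 - c^2 - d^2 (the Minkowski norm)
    a, b, c, d = v
    return np.array([[a - b, c + 1j * d],
                     [c - 1j * d, a + b]])

def minkowski(v):
    a, b, c, d = v
    return a**2 - b**2 - c**2 - d**2

# Random SL(2,C) matrix: rescale any invertible 2x2 matrix to det 1
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U = A / np.sqrt(np.linalg.det(A))

v = rng.normal(size=4)
M = to_matrix(v)
Mp = U @ M @ U.conj().T

# Hermiticity and the Minkowski norm are both preserved
print(np.allclose(Mp, Mp.conj().T))                      # True
print(np.isclose(np.linalg.det(Mp).real, minkowski(v)))  # True
```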

It's a bit late tonight to go through all of this with gamma matrices,
but you can see that the matrix M above has the Pauli matrices
hiding in it, and you should be able to fit them together to make
the 3+1 D Clifford algebra.

Some of the group theory is in Sternberg's book _Group Theory and
Physics_. A really nice review of Clifford algebras is West's
Review hep-th/981101.

Aaron
--
Aaron Bergman
<http://www.princeton.edu/~abergman/>


Walter Kunhardt

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
On Wed, 29 Mar 2000, Toby Bartels wrote:

> ...


> Indeed, *every* quantisation scheme I know of results in a Fock space.
> L^2(R) is the symmetric Fock space of C, for example.
> It's just that most people don't think of some of them as Fock spaces.
>

Playing the devil's advocate for this time:

Why is L^2(R) the symmetric Fock space over C ? Isn't a Fock space always
the direct sum of all possible tensor powers of a given Hilbert space?
So where's the N-grading in L^2(R)?

(I'm too lazy to write a long post on the Weyl algebra, harmonic
oscillator, canonical commutation relations and all that now.
But the answer "all infinite-dimensional separable Hilbert spaces are
isomorphic" would definitely be too short.)


Regards,

Walter Kunhardt.

squ...@my-deja.com

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <8bter3$d0u$1...@rosencrantz.stcloudstate.edu>,

Charles Francis <cha...@clef.demon.co.uk> wrote:
> It appears to me more useful, more intuitive certainly, to work the
> other way round. Establish Fock space empirically as a categorisation
> scheme for measurement results, then demonstrate the commutation
> relations.

It seems to me that the commutation relations are at this stage an
axiom. That is, you must take either the de Broglie relations, or the
correspondence between Noether charges and symmetry generators, or
simply the canonical commutation relations for granted. The mere fact
that we have a Fock space doesn't guarantee us anything (unless you assume
the position and momentum operators to be something specific, which is
again equivalent to the initial assumption).

Regards, squark.




Aaron Bergman

unread,
Mar 31, 2000, 3:00:00 AM3/31/00
to
In article <slrn8e3cln....@tree1.Stanford.EDU>,
aber...@princeton.edu wrote:

> A really nice review of Clifford algebras is West's Review hep-th/981101.

This should be 9811101.

Gerard Westendorp

unread,
Apr 1, 2000, 3:00:00 AM4/1/00
to
squ...@my-deja.com wrote:
[..]

>
> Fair enough :-). Well, this is a function which assigns to each point in
> space a "Weyl spinor" - i.e. a two-component object, which changes in the
> fundamental representation of SU(2) under rotations - i.e. the SU(2) matrices
> simply multiply it - and whose Lorentz boost transformations are generated
> by i times the Pauli matrices.

I thought Pauli matrices are SU(2) matrices?

> The result is a representation of SU(2)xSU(2)
> - the universal covering group of SO(3,1) (the Lorentz transformations). In
> other words, when ignoring Lorentz boosts, it is just the usual wavefunction
> of a spin-1/2 particle - just like in non-relativistic quantum mechanics.
>
>


Do you mean the wave function in the Pauli equation?

Gerard


Charles Francis

unread,
Apr 1, 2000, 3:00:00 AM4/1/00
to
In article <8bvhh5$ko8$1...@nnrp1.deja.com>, thus spake squ...@my-deja.com
Position and momentum are something specific. Fock space is defined by
using states of exact position as a basis of one particle Hilbert space
and building it up from there. Momentum is defined by taking the Fourier
transform, a straightforward linear combination of position states.

You may say this is equivalent to the commutation relations, but
generally we choose axioms which are simple and obvious and prove
theorems from them, even though technically it may be possible to prove
axioms from theorems. If you just take commutation relations as a
starting point, then you may find the rules, but you do not find out
what the operators actually are.

Michael Weiss

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to

Charles Francis wrote:

| Position and momentum are something specific. Fock space is defined by
| using states of exact position as a basis of one particle Hilbert space
| and building it up from there. Momentum is defined by taking the Fourier
| transform, a straightforward linear combination of position states.

Idiosyncratic terminology. The usual definition of Fock space in no way
requires that you start with any particular basis. (I believe there is no
historic justification for such a statement either.) In fact, most of the
treatments I've looked at emphasize the momentum representation rather than
the position representation --- when they emphasize any particular basis at
all.

| You may say this is equivalent to the commutation relations, but
| generally we choose axioms which are simple and obvious and prove
| theorems from them

Purely from a pedagogical standpoint, you will find partial support in
Weinberg's book on QFT. He develops Fock space first and defines the
annihilation and creation operators (for momentum eigenstates), and then uses
these definitions to derive the commutation relations. He notes that this is
the reverse of the usual order. In the preface he gives his reasons:

[The traditional approach] is certainly a way of getting rapidly
into the subject, but it seems to me that it leaves the reflective
reader with too many unanswered questions. Why should we
believe in the rules of canonical quantization or path integration?
Why should we adopt the simple field equation and Lagrangians
that are found in the literature? For that matter, why have fields
at all?


squ...@my-deja.com

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
In article <SIxdUEAy...@clef.demon.co.uk>,

Charles Francis <cha...@clef.demon.co.uk> wrote:
> Position and momentum are something specific. Fock space is defined by
> using states of exact position as a basis of one particle Hilbert
> space and building it up from there. Momentum is defined by taking
> the Fourier transform, a straightforward linear combination of
> position states.
> You may say this is equivalent to the commutation relations, but
> generally we choose axioms which are simple and obvious and prove
> theorems from them, even though technically it may be possible to
> prove axioms from theorems.

The Noether charges vs. symmetry generators assumption seems natural to
me.

> If you just take commutation relations as a starting point, then you
> may find the rules, but you do not find out what the operators
> actually are.

What they actually are?! Obviously, you may only define this up to a
unitary equivalence, and that the commutation relations definitely
provide. In other words, you construct a certain basis (the Fock one,
for instance), and in this basis you may say what the
operators "actually are".

Regards, squark.


squ...@my-deja.com

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
> In article <8bnilo$o93$1...@nnrp1.deja.com>, squ...@my-deja.com wrote:
> > ...The result is a representation of SU(2)xSU(2)

> > - the universal covering group of SO(3,1) (The Lorentz
> > transformations).
>
> This isn't true. It's a fairly common confusion, however, that
> has to do with the difference between real and complex forms of
> Lie algebras.
>
> SU(2)xSU(2) = Spin(4) which is the universal cover of SO(4).

You are right of course. I apologize.

Regards, squark.

Michael Weiss

unread,
Apr 3, 2000, 3:00:00 AM4/3/00
to
Greg Weeks started this particular hare, is that right? Well, John Baez
started the thread, talking about the relation between the 1-particle space,
the Fock space, and wave-equations, but Greg said:

| That aside, here's a bit of propaganda for the general readership: For the
| majority of applications of quantum field theory, there is no need to learn
| about the "second quantization" of free field theories, and few textbooks
| rely on it. The Fock space is the *outcome* of either canonical
| quantization or path-integral quantization. You get it for free.

I have the impression Greg was voicing a pedagogical opinion, though somewhat
obscurely. I don't *think* we've had a technical disagreement yet, though
perhaps I've missed something.

I don't know if my limited knowledge of QFT entitles me to an opinion, but
from where I sit now, I'd be inclined to disagree. Perhaps when I understand
interacting theories better, I will rue the time I spent gaining intuition
about Fock space and the free field case. But I doubt it.


Toby Bartels

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

O, and you think I'm *not* too lazy?
Maybe I'll write a *short* post, and see how you like it!
Anyway, the N-grading is just the division into
eigenstates of the harmonic oscillator Hamiltonian.
The state of n bosonic quanta is equivalent to
the harmonic oscillator state with energy omega(n + 1/2).
As R^2 is the phase space for a harmonic oscillator
and C is the complexification of R^2,
this is exactly relevant.
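[Editor's note: Toby's identification can be made concrete with a short numerical sketch (my own; the grid, cutoff n <= 5, and normalization conventions are arbitrary choices). The n-th Fock basis vector of the symmetric Fock space over C corresponds to the n-th harmonic oscillator eigenfunction h_n(u) = H_n(u) exp(-u^2/2) / sqrt(2^n n! sqrt(pi)), and these are orthonormal in L^2(R), exhibiting the hidden grading.]

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

# Grid on which the Hermite functions have decayed to ~0 at the ends,
# so a plain Riemann sum is an extremely accurate quadrature
u = np.linspace(-12, 12, 4001)
du = u[1] - u[0]

def h(n):
    # n-th normalized Hermite function (physicists' convention)
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = sqrt(2**n * factorial(n) * sqrt(pi))
    return hermval(u, c) * np.exp(-u**2 / 2) / norm

# Gram matrix of the first 6 Hermite functions: should be the identity
G = np.array([[np.sum(h(m) * h(n)) * du for n in range(6)]
              for m in range(6)])
print(np.allclose(G, np.eye(6), atol=1e-6))  # True
```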


-- Toby
to...@ugcs.caltech.edu


squ...@my-deja.com

unread,
Apr 4, 2000, 3:00:00 AM4/4/00
to
In article <38E5BF73...@xs4all.nl>,

Gerard Westendorp <wes...@xs4all.nl> wrote:
> squ...@my-deja.com wrote:
> [..]
>
> >
> > Fair enough :-). Well, this is a function which assigns to each
> > point in space a "Weyl spinor" - i.e. a two-component object, which
> > changes in the fundamental representation of SU(2) under rotations -
> > i.e. the SU(2) matrices simply multiply it - and whose Lorentz
> > boost transformations are generated by i times the Pauli
> > matrices.
>
> I thought Pauli matrices are SU(2) matrices?

That's right - see Bergman's reply. The result is not the SU(2) X SU(2)
group, but the Spin(4) group. The rotation SU(2), however, is still a
subgroup of Spin(4) generated by the Pauli matrices.

> > In other words, when ignoring Lorentz boosts, it is just the usual
> > wavefunction of a spin-1/2 particle - just like in non-relativistic
> > quantum mechanics.
> >
>
> Do you mean the the wave function in the Pauli equation?

That's right!

Best regards, squark.


John Baez

unread,
Apr 5, 2000, 3:00:00 AM4/5/00
to
Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:

>Why is L^2(R) the symmetric Fock space over C ?

Because God made it so!

>Isn't a Fock space always
>the direct sum of all possible tensor powers of a given Hilbert space?

Yes - symmetrized tensor powers, if we're talking about a bosonic
Fock space (as we are).

>So where's the N-grading in L^2(R)?

As Toby noted it comes from the eigenvalues of the harmonic
oscillator Hamiltonian. In fact, if we start with any real
Hilbert space H, and let CH be its complexification, there is
a natural isomorphism between L^2(H) and the Fock space over
CH. Here L^2(H) can be defined using Lebesgue measure if H
is finite-dimensional, and using fancier tricks otherwise.

Note that by saying there's a *natural* isomorphism I'm asserting
much more than just the existence of an isomorphism - which, as
you noted, wouldn't be very exciting.

The guy who worked this out in excruciating detail was Irving
Segal - check out the book "An Introduction to Algebraic and
Constructive Quantum Field Theory". There's a big section
on the isomorphism between the "particle representation" (or
Fock representation) and the "real wave representation" (or
Schroedinger representation) of the canonical commutation
relations.

Charles Francis

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
In article <8c59dj$2jr$1...@bob.news.rcn.net>, thus spake Michael Weiss
<mic...@spamfree.net>

>
>Charles Francis wrote:
>
>| Position and momentum are something specific. Fock space is defined by
>| using states of exact position as a basis of one particle Hilbert space
>| and building it up from there. Momentum is defined by taking the Fourier
>| transform, a straightforward linear combination of position states.
>
>Idiosyncratic terminology. The usual definition of Fock space in no way
>requires that you start with any particular basis. (I believe there is no
>historic justification for such a statement either.) In fact, most of the
>treatments I've looked at emphasize the momentum representation rather than
>the position representation --- when they emphasize any particular basis at
>all.

Of course, from a purely mathematical point of view, it makes no
difference to the idea of a vector space what basis you use. And in the
infinitely dimensional Hilbert space generally used, there is some
awkwardness in describing position states, because of the inner product.
But if you use a finite dimensional representation momentum and position
are not symmetrical, and a choice has to be made. I have tried it both
ways. When I first used non-standard analysis to describe a
representation of a rigged Hilbert space, I chose momentum as the more
fundamental. But I have since concluded that position is fundamental
because it is the property which is directly measured. Momentum is
measured by a combination of measurements over time, and is properly
regarded as a linear combination of position states.

The lack of symmetry is shown in the fact that while position is
discrete, momentum space is the unit circle. Then momentum states span
the Hilbert space, but their cardinality is infinite, so they are not a
basis. This lack of symmetry in the mathematical structure reflects a
lack of symmetry in the physics, and seems to improve the integrity of
the model.

>| You may say this is equivalent to the commutation relations, but
>| generally we choose axioms which are simple and obvious and prove

>| theorems from them
>
>Purely from a pedagogical standpoint, you will find partial support in
>Weinberg's book on QFT. He develops Fock space first and defines the
>annihilation and creation operators (for momentum eigenstates), and then uses
>these definitions to derive the commutation relations. He notes that this is
>the reverse of the usual order. In the preface he gives his reasons:
>
> [The traditional approach] is certainly a way of getting rapidly
> into the subject, but it seems to me that it leaves the reflective
> reader with too many unanswered questions. Why should we
> believe in the rules of canonical quantization or path integration?
> Why should we adopt the simple field equation and Lagrangians
> that are found in the literature? For that matter, why have fields
> at all?
>

Absolutely. It is good to hear one's objections raised by established
figures.

Charles Francis

unread,
Apr 6, 2000, 3:00:00 AM4/6/00
to
In article <8c4634$c21$1...@bob.news.rcn.net>, thus spake Michael Weiss
<mic...@spamfree.net>
>

>I don't know if my limited knowledge of QFT entitles me to an opinion, but
>from where I sit now, I'd be inclined to disagree. Perhaps when I understand
>interacting theories better, I will rue the time I spent gaining intuition
>about Fock space and the free field case. But I doubt it.
>
It depends on what you want to use field theory for. In general field
theories describe quasi-particles which themselves consist of (semi)
stable subsystems, and are not properly described by basis states in
Fock space. But QED can be built from a Fock space of bare electrons and
photons. Operators on Fock space are then used to describe interactions.

Walter Kunhardt

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to
On 5 Apr 2000, John Baez wrote:

> Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:
>
> >Why is L^2(R) the symmetric Fock space over C ?
>
> Because God made it so!
>

Oh, I see, that's why.

> ...


> >So where's the N-grading in L^2(R)?
>
> As Toby noted it comes from the eigenvalues of the harmonic
> oscillator Hamiltonian.

It seems strange to me that one needs to bring a specific
physical system into the game. Maybe it's not all that bad
in this case, since this particular Hamiltonian is quite a
canonical one, but still...


> In fact, if we start with any real
> Hilbert space H, and let CH be its complexification, there is
> a natural isomorphism between L^2(H) and the Fock space over
> CH. Here L^2(H) can be defined using Lebesgue measure if H
> is finite-dimensional, and using fancier tricks otherwise.

Ah, this explanation is even better than the first one!!

>
> Note that by saying there's a *natural* isomorphism I'm asserting
> much more than just the existence of an isomorphism - which, as
> you noted, wouldn't be very exciting.
>
> The guy who worked this out in excruciating detail was Irving
> Segal - check out the book "An Introduction to Algebraic and
> Constructive Quantum Field Theory". There's a big section
> on the isomorphism between the "particle representation" (or
> Fock representation) and the "real wave representation" (or
> Schroedinger representation) of the canonical commutation
> relations.
>

I checked this book in the meantime, but I didn't find the
answer to my troubles right away, so let me be more specific
about what I don't understand here:

The isomorphism between l^2(N_0) = [Fock space over C]
and L^2(R) will be such that it maps the n-th unit vector
(0,...,0,1,0,0,....) to the n-th eigenfunction of the
harmonic oscillator, i.e. to u |--> e^{-u^2/2} H_n(u),
where H_n is the n-th Hermite polynomial, _UP_TO_
normalisation. Now what I'm wondering about is how to choose
the right phase of that normalisation factor. In particular,
it's not clear to me whether the "R" in L^2(R) is position
or momentum space (of our oscillator); this would amount to
a factor i^n in that phase.

Maybe your explanation starting with the real Hilbert space
H gives a hint: in this case, the Fock space over H has
one more piece of structure which "usual" Fock spaces don't
have, viz. the involution induced by
v+iw |--> v-iw , v,w \in H .

My feeling is that the isomorphism between l^2(N_0) and
L^2(R) is secretly required to be such that this involution
corresponds [in L^2(R)] to pointwise complex conjugation
(or something of this kind...).


Is this right, or am I still missing something?


Regards,

Walter Kunhardt.


Michael Weiss

unread,
Apr 7, 2000, 3:00:00 AM4/7/00
to

Charles Francis:

|Of course, from a purely mathematical point of view, it makes no

|difference to the idea of a vector space what basis you use. [snip]


|But if you use a finite dimensional representation momentum and position
|are not symmetrical, and a choice has to be made.

The "not" in that last sentence is a typo, yes? (Again, emphasizing that we
are considering a finite dimensional Hilbert space for the moment.) Do you
have in mind the space of state-vectors for a finite lattice? The finite
Fourier transform enables us to pass back and forth between position
eigenvectors and momentum eigenvectors without a hiccup, for this case.

|The lack of symmetry is shown in the fact that while position is
|discrete, momentum space is the unit circle. Then momentum states span

|the Hilbert space, but their cardinality is infinite, so they are not a
|basis.

Here I gather you have turned your attention to the space of state-vectors
for an infinite lattice. Let's stick to one-dimension, to keep notation
simple. So the Hilbert space of state-vectors is L^2(integers). The
position eigenvectors are {delta_n | n an integer}, where delta is the
Kronecker delta, not the Dirac delta. These do form a basis, even though
their cardinality is infinite.

The momentum "eigenvectors" are {exp(ikn) | 0 <= k < 2pi}. These don't form
a basis, but that's not their only problem. They aren't even elements of
L^2(integers), because they are not square-summable. (In physicists'
jargon, they have infinite norms.) Hence the scare-quotes.
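To make the infinite-norm claim concrete, here's a quick numerical sketch (Python; my own illustration, not part of the original post): the partial norms of exp(ikn) over the sites n = -N, ..., N grow like 2N+1, while a position eigenvector delta_0 has norm 1.

```python
import cmath

def partial_norm_sq(k, N):
    """Sum of |e^{ikn}|^2 over lattice sites n = -N, ..., N."""
    return sum(abs(cmath.exp(1j * k * n)) ** 2 for n in range(-N, N + 1))

# A position eigenvector delta_0 is square-summable: its norm is 1.
delta_norm_sq = sum(1.0 for n in range(-1000, 1001) if n == 0)

# A momentum "eigenvector" e^{ikn} is not: every term has modulus 1,
# so the partial sums are just 2N+1 and diverge as N grows.
for N in (10, 100, 1000):
    print(N, partial_norm_sq(0.7, N))
```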

So OK, I agree, if we model space as an infinite lattice, then position
eigenvectors are a bit nicer than momentum eigenvectors. If we model space
as R (or R^3, in three dimensions) then the position "eigenvectors" turn
into Dirac delta "functions", and position and momentum are on the same
footing again.

I don't get too upset over this. After all, the theory of distributions was
invented just to give precise mathematical meanings to the Dirac delta
function and its cousins.

I gather from your post "QED as a theory of particles", that you prefer to
stick with finite dimensional Hilbert spaces until the very end, when you
take your limits. This makes me wonder why you regard position eigenstates
as "more real" than momentum eigenstates. Of course the answer might lie in
the details of how you take your limits.

Toby Bartels

unread,
Apr 10, 2000, 3:00:00 AM
to
Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:

>John Baez <ba...@galaxy.ucr.edu> wrote:

>>Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:

>>>So where's the N-grading in L^2(R)?

>>As Toby noted it comes from the eigenvalues of the harmonic
>>oscillator Hamiltonian.

>It seems strange to me that one needs to bring a specific
>physical system into the game. Maybe it's not all that bad
>in this case, since this particular Hamiltonian is quite a
>canonical one, but still...

I don't think we're really bringing a physical system in;
it's just that saying "harmonic oscillator Hamiltonian"
is a nice, familiar way to refer to a particular well known operator.

>>In fact, if we start with any real
>>Hilbert space H, and let CH be its complexification, there is
>>a natural isomorphism between L^2(H) and the Fock space over
>>CH. Here L^2(H) can be defined using Lebesgue measure if H
>>is finite-dimensional, and using fancier tricks otherwise.

>Ah, this explanation is even better than the first one!!

I have a disturbing feeling that the fancier tricks
might amount to defining L^2(H) as F(C(H)).
But maybe the fancier tricks also define L^p(H),
in which case they couldn't be anything so trivial.
At any rate, the theorem for Lebesgue measure on finite-dimensional spaces
is all we need here.

>The isomorphism between l^2(N_0) = [Fock space over C]
>and L^2(R) will be such that it maps the n-th unit vector
>(0,...,0,1,0,0,....) to the n-th eigenfunction of the
>harmonic oscillator, i.e. to u |--> e^{-u^2/2} H_n(u),
>where H_n is the n-th Hermite polynomial, _UP_TO_
>normalisation. Now what I'm wondering about is how to choose
>the right phase of that normalisation factor. In particular,
>it's not clear to me whether the "R" in L^2(R) is position
>or momentum space (of our oscillator); this would amount to
>a factor i^n in that phase.

The obvious choice is that the normalisation factor be a positive real;
but I guess you want to justify that.

>Maybe your explanation starting with the real Hilbert space
>H gives a hint: in this case, the Fock space over H has
>one more piece of structure which "usual" Fock spaces don't
>have, viz. the involution induced by
> v+iw |--> v-iw , v,w \in H .

>My feeling is that the isomorphism between l^2(N_0) and
>L^2(R) is secretly required to be such that this involution
>corresponds [in L^2(R)] to pointwise complex conjugation
>(or something of this kind...).

Not that I'm familiar with what John was talking about,
but I'd be willing to bet that this is just right.
This means that you send the nth unit vector of l^2
to a real valued function, which is exactly what happens
if you make the normalisation factor what I suggested above.

>Is this right, or am I still missing something?


-- Toby
to...@ugcs.caltech.edu


John Baez

unread,
Apr 11, 2000, 3:00:00 AM
to
In article <8cr40a$e...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:

>>John Baez <ba...@galaxy.ucr.edu> wrote:

>>>In fact, if we start with any real
>>>Hilbert space H, and let CH be its complexification, there is
>>>a natural isomorphism between L^2(H) and the Fock space over
>>>CH. Here L^2(H) can be defined using Lebesgue measure if H
>>>is finite-dimensional, and using fancier tricks otherwise.

>>Ah, this explanation is even better than the first one!!

>I have a disturbing feeling that the fancier tricks
>might amount to defining L^2(H) as F(C(H)).

In mathematics, everything amounts to everything else
if you're willing to put enough work into getting from
here to there. But....

>But maybe the fancier tricks also define L^p(H),
>in which case they couldn't be anything so trivial.

Yes, these fancier tricks do allow us to define L^p spaces
as well, so it's not a *trivial* fact that L^2(H) is isomorphic
to the Fock space of the complexification of the real Hilbert
space H - which you're calling F(C(H)).

Basically, the idea is this: to generalize lots of stuff about
L^2(R^n) and physics to the infinite-dimensional case, you
shouldn't try to use Lebesgue measure. You should use a Gaussian
measure!

While the Lebesgue measure dx on an infinite-dimensional real Hilbert
space H is hopelessly ill-defined, the Gaussian measure exp(-|x|^2) dx
is not hopelessly ill-defined. It's not a finite Borel measure - you
can't use it to integrate all bounded continuous functions on H.
It's something a bit spicier - some people call it a "promeasure" or
"cylinder measure". It allows you to integrate bounded continuous
functions that depend only on finitely many coordinates on H!
Starting from these functions we can define L^2(H) and prove that
it's naturally isomorphic to F(C(H)).

Here by "natural" I mean that the orthogonal group O(H) has unitary
representations on L^2(H) and F(C(H)), and that the isomorphism
between these spaces is actually a unitary equivalence of
representations of O(H).

All this stuff is in "An Introduction to Algebraic and Constructive
Quantum Field Theory" by Segal, Zhou and myself. I regret to say that
Segal did not let us use the term "promeasure" or "cylinder measure" -
he insisted on the term "distribution". This is terrible, because
"distribution" means at least 3 other important things in mathematics.

Oh well.


Charles Francis

unread,
Apr 11, 2000, 3:00:00 AM
to
In article <8cj7g9$6s8$1...@bob.news.rcn.net>, thus spake Michael Weiss
<mic...@spamfree.net>
>

>Charles Francis:
>
>|Of course, from a purely mathematical point of view, it makes no
>|difference to the idea of a vector space what basis you use. [snip]
>|But if you use a finite dimensional representation momentum and position
>|are not symmetrical, and a choice has to be made.
>
>The "not" in that last sentence is a typo, yes?

I'm sure I do more than my quota of typos, but not this time. Sticking
with one dimension we have for the basis in position space

{-n, 1-n, 2-n, ..., n-2, n-1, n}

Then momentum eigenvectors are defined for p in the unit circle, which
is a continuum. They do span the Hilbert space, but they are not a basis
because they have infinite cardinality in a finite dimensional space.

On the other hand we can switch backwards and forward between position
and momentum space without a hiccough, and we have a well defined inner
product for both momentum states and position states

<x|y> = kronecker delta_x_y

and

<p|q> = 1/(2 pi) sum_{x=-n}^{n} e^{-ix(p-q)} = Dirac delta(p-q)


This Dirac delta is a continuous, well-defined function of p and q.
Even though I am using a finite dimensional space and a well defined
function, it is reasonable to call <p|q> a Dirac delta because it obeys

<q|f> = integral_{-pi}^{pi} dp <q|p> <p|f>

as is straightforward to show. (We can also take a non-standard infinite
integer for n, in which case <q|p> is a non-standard representation of a
Dirac delta, but it is difficult to take a limit and replace it with a
distribution without bringing in square summable norms and losing a
vital part of state space).
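As a sanity check of that reproducing property, here is a small numerical sketch (Python; my own illustration, with the convention <x|p> = e^{ipx}/sqrt(2 pi) assumed, since the post doesn't fix one): for n = 3 the kernel <q|p> is a finite Dirichlet kernel, and integrating it against <p|f> over the unit circle does return <q|f>.

```python
import cmath
import math
import random

n = 3
sites = range(-n, n + 1)
random.seed(0)
f = {x: complex(random.random(), random.random()) for x in sites}  # arbitrary state

def momentum_amp(p):
    """<p|f> = (1/sqrt(2 pi)) sum_x e^{-ipx} f(x)."""
    return sum(cmath.exp(-1j * p * x) * f[x] for x in sites) / math.sqrt(2 * math.pi)

def kernel(u):
    """<q|p> with u = p - q: (1/(2 pi)) sum_x e^{-ixu}, a finite Dirichlet kernel."""
    return sum(cmath.exp(-1j * x * u) for x in sites) / (2 * math.pi)

# Check <q|f> = integral_{-pi}^{pi} dp <q|p> <p|f> with a midpoint rule.
# The integrand is a trigonometric polynomial of low degree, so the
# equispaced rule is exact up to rounding.
q = 1.234
M = 400
dp = 2 * math.pi / M
integral = sum(kernel(-math.pi + (j + 0.5) * dp - q)
               * momentum_amp(-math.pi + (j + 0.5) * dp) * dp
               for j in range(M))
print(abs(integral - momentum_amp(q)))  # essentially zero
```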


<snip, I hope this clarifies>


>
>I gather from your post "QED as a theory of particles", that you prefer to
>stick with finite dimensional Hilbert spaces until the very end, when you
>take your limits. This makes me wonder why you regard position eigenstates
>as "more real" than momentum eigenstates. Of course the answer might lie in
>the details of how you take your limits.
>

To an extent it does. But although I have a lack of symmetry between
momentum space and position space, at least superficially one could
switch the analysis, and deal with a finite discrete set of momentum
states, and position states in the unit circle. However that does appear
to lead to other difficulties. For example in this model the propagator
can easily be seen to be modified from the standard (divergent)
propagator by the subtraction of a kronecker delta, which renders it
finite in the limit n->infinity as discrete time->0, and has the
physical meaning that no particle can interact twice at the same instant
of its life-time. I dare say the same statement could be made in a model
with a discrete basis for momentum, but it would certainly be more
awkward to express.

Secondly, if one has an eye to unification with gtr the definition of qm
on finite regions of space has distinct advantages.

Finally, and perhaps most to the point, there is something fundamental
about position in the way in which we analyse and interpret the world. A
position measurement is about as primitive as measurement gets, either
in terms of the simplicity of measurement, or in terms of the fact that
we measure the position of everything we see just by looking at it. On
the other hand to measure momentum I have to carry out a series of
position measurements over time, so it is natural that momentum states
should be thought of as a linear combination of more fundamental states
in Hilbert space.

John Baez

unread,
Apr 12, 2000, 3:00:00 AM
to
Walter Kunhardt <kunh...@Theorie.Physik.UNI-Goettingen.DE> wrote:

>Is this right, or am I still missing something?

There are lots of ways to think about this stuff - that's why
hundreds of pages of "Introduction to Algebraic and Constructive
Quantum Field Theory" are about free fields - but here's one.

Let H be a finite-dimensional real Hilbert space, which we think
of as the "configuration space" for a classical system. Let CH be its
complexification, which is the "phase space" of our system.

Let L^2(H) be the complex Hilbert space of square-integrable complex
functions on H, which we think of as the "Schrodinger representation"
for the corresponding quantum system. Let F(C(H)) be the Fock space
over CH, which we think of as the "Fock representation" for the same
quantum system.

We want to construct an isomorphism between L^2(H) and F(C(H)).

First we construct a representation of the canonical commutation
relations on both these spaces. Subtleties of analysis aside, we
do this in the obvious way. On L^2(H) we have q's and p's given
by multiplication and differentiation operators. F(C(H)) is a
completion of the polynomial algebra over C(H), so we have creation
operators which come from multiplying polynomials by linear functions,
and annihilation operators that come from differentiating polynomials.
Starting from these creation and annihilation operators we define q's
and p's by the usual formulae.

It's not too hard to prove that both these spaces are irreducible
representations of the canonical commutation relations, so the
Stone-von Neumann uniqueness theorem says these representations
are unitarily equivalent, and Schur's lemma says the unitary
equivalence

U: F(C(H)) -> L^2(H)

is unique up to a phase.

How do we pick the phase? I think this is part of what you're
wondering about. Well, the space F(C(H)) has a distinguished
vector in it, namely the polynomial 1. This gets mapped by U
to some phase times the Gaussian function

exp( -|x|^2 / 2)

on H. Let's pick U so that this phase is 1.

As you note, there is a god-given antiunitary operator K on L^2(H)
given by pointwise complex conjugation of our functions on H. If
we pick U as above, we can pull back this operator K to get an
antiunitary operator K' on F(C(H)). What is this operator?

I've never thought about this question before, but since everything
about the constructions so far has been god-given, K' must be some
god-given antiunitary operator on F(C(H)). What could this be?

(Here "god-given" is a technical term meaning "invariant under all
the symmetries of the data we started with". Since we started with
just a real Hilbert space H, this means invariant under all orthogonal
transformations of H. Often people say "canonical" instead of "god-
given", but I like to wear my religion on my sleeve. God made functors
and said they were good.)

Well, F(C(H)) can be thought of as all *complex* polynomials on the
*real* Hilbert space H. There is an obvious notion of "complex
conjugation" of such a polynomial. So I would guess that K' is
this complex conjugation.

And indeed we can prove this by noting that U maps the *real* polynomials
in F(C(H)) - i.e. those invariant under K' - to the Hermite functions on
H, which are real-valued - i.e. invariant under K.

Nice!
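A quick numerical check of that last step (Python; my own illustration, not from the post): the Hermite functions H_n(x) e^{-x^2/2} are real-valued, mutually orthogonal in L^2(R), and <h_n, h_n> = 2^n n! sqrt(pi).

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial H_n(x), via the recurrence
    H_0 = 1, H_1 = 2x, H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def h_fn(n, x):
    """Unnormalised Hermite function H_n(x) e^{-x^2/2} - manifestly real."""
    return hermite(n, x) * math.exp(-x * x / 2.0)

def inner(m, n, a=10.0, steps=4000):
    """Trapezoid approximation to integral_{-a}^{a} h_m(x) h_n(x) dx."""
    dx = 2.0 * a / steps
    total = sum(h_fn(m, -a + i * dx) * h_fn(n, -a + i * dx) for i in range(steps + 1))
    total -= 0.5 * (h_fn(m, -a) * h_fn(n, -a) + h_fn(m, a) * h_fn(n, a))
    return total * dx

# Distinct Hermite functions are orthogonal; <h_n, h_n> = 2^n n! sqrt(pi).
print(inner(0, 2), inner(1, 3))                      # both essentially zero
print(inner(2, 2), 2 ** 2 * math.factorial(2) * math.sqrt(math.pi))
```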

By the way, all this stuff generalizes nicely to the case when H
is infinite-dimensional. We have to use a Gaussian measure instead
of Lebesgue measure to define L^2(H) in this case, and the Stone-
von Neumann uniqueness theorem doesn't apply directly to infinite
dimensions, so we have to look at what happens on all finite-dimensional
subspaces, but it's really not much harder.

Thanks for making me think about complex conjugation in this context.
It is a good thing.

John Baez

unread,
Apr 12, 2000, 3:00:00 AM
to
In article <f9hOZMA6...@clef.demon.co.uk>,
Charles Francis <cha...@clef.demon.co.uk> wrote:

>Sticking with one dimension we have for the basis in position space
>
> {-n,1-n,2-n,... ,n-2,n-1,n}
>
>Then momentum eigenvectors are defined for p in the unit circle, which
>is a continuum. They do span the Hilbert space, but they are not a basis
>because they have infinite cardinality in a finite dimensional space.

You haven't said how you are defining your "momentum operator"
in this situation, so I can't compute its spectrum. However,
I know one thing: if you have a linear operator on a finite-
dimensional vector space, it has a finite set of eigenvalues.
The reason is that the eigenvalues of T must be solutions of
the characteristic equation det(T - xI) = 0, which is a polynomial
equation and thus has finitely many solutions. So it can't
possibly be true that every point in the unit circle is an
eigenvalue of your momentum operator.

(Note: I'm not *sure* you're actually claiming here that every number
p in the unit circle is an eigenvalue of your momentum operator, but
it sure *seems* like you're claiming this. A while ago you asked
why I don't understand your theories, so I've decided to spend a
little more time than before pointing out remarks of yours that
seem false to me, as a kind of explanation of why I have trouble
understanding you. This is one.)

Let me explain Michael Weiss' remark about the perfect symmetry
between position and momentum space when we take position space
to be the integers mod n, also known as Z/n.

The Hilbert space L^2(Z/n) has an interesting unitary operator
on it, namely the translation operator

Uf(x) = f(x + 1)

The spectrum of this operator consists of all nth roots of unity.
These numbers form a group isomorphic to Z/n. When we take
functions on Z/n and decompose them in terms of eigenvectors of U,
we thus get a new way of thinking of them as functions on Z/n.
This gives us an operator

F: L^2(Z/n) -> L^2(Z/n)

This operator is analogous to the Fourier transform.

There is thus a perfect symmetry between position space and
momentum space in this situation!

This stuff generalizes from Z/n to any finite abelian group and
is called Pontrjagin duality. In physspeak, if we take any
finite abelian group as our "position space", the same group
will be our "momentum space".
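Here's the n = 6 case worked out numerically (Python; my own sketch, not part of the post): the functions f_k(x) = omega^{kx}, with omega = e^{2 pi i / n}, are eigenvectors of the translation operator U, and the eigenvalues are exactly the nth roots of unity omega^k.

```python
import cmath
import math

n = 6
omega = cmath.exp(2j * math.pi / n)   # primitive nth root of unity

def translate(f):
    """(Uf)(x) = f(x + 1 mod n), with f given as a length-n list."""
    return [f[(x + 1) % n] for x in range(n)]

errs = []
for k in range(n):
    f_k = [omega ** (k * x) for x in range(n)]   # candidate eigenvector
    Uf = translate(f_k)
    lam = omega ** k                             # expected eigenvalue
    errs.append(max(abs(Uf[x] - lam * f_k[x]) for x in range(n)))

print(max(errs))  # essentially zero: U f_k = omega^k f_k for every k
```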

Toby Bartels

unread,
Apr 13, 2000, 3:00:00 AM
to
John Baez <ba...@galaxy.ucr.edu> wrote at last:

>Starting from these functions we can define L^2(H) and prove that
>it's naturally isomorphic to F(C(H)).

>Here by "natural" I mean that the orthogonal group O(H) has unitary
>representations on L^2(H) and F(C(H)), and that the isomorphism
>between these spaces is actually a unitary equivalence of
>representations of O(H).

So, by "natural", you *don't* mean that L^2 and F.C are functors
joined by an invertible 2functor (natural transformation)?
Or is the above paragraph actually a secret description of said 2functor?
I should make this a puzzle for me, but I'm busy trying to define
C*coalgebras right now.

>All this stuff is in "An Introduction to Algebraic and Constructive
>Quantum Field Theory" by Segal, Zhou and myself. I regret to say that
>Segal did not let us use the term "promeasure" or "cylinder measure" -
>he insisted on the term "distribution". This is terrible, because
>"distribution" means at least 3 other important things in mathematics.

The distributions I'm most familiar with, the generalised functions
like Dirac's delta, are themselves measures, at least.
Perhaps he should have said "cylinder distribution" to be precise?


-- Toby
to...@ugcs.caltech.edu


John Baez

unread,
Apr 14, 2000, 3:00:00 AM
to
In article <8d11vd$m...@gap.cco.caltech.edu>,
Toby Bartels <to...@ugcs.caltech.edu> wrote:

>John Baez <ba...@galaxy.ucr.edu> wrote at last:

>>Starting from these functions we can define L^2(H) and prove that
>>it's naturally isomorphic to F(C(H)).

>>Here by "natural" I mean that the orthogonal group O(H) has unitary
>>representations on L^2(H) and F(C(H)), and that the isomorphism
>>between these spaces is actually a unitary equivalence of
>>representations of O(H).

>So, by "natural", you *don't* mean that L^2 and F.C are functors
>joined by an invertible 2functor (natural transformation)?

First: Please don't use "2-functor" to mean "natural transformation",
because lots of people already use "2-functor" to mean the kind of
thing that goes from one 2-category to another.

If you want, you can call a functor a "morphism" and call a natural
transformation a "2-morphism" (in the 2-category of categories) -
that may confuse some people, but not me!

Second: yes, what I was secretly saying was that L^2 and F.C are
natural isomorphic functors. For example: they are functors from
the category of

[real Hilbert spaces and orthogonal operators]

to the category of

[complex Hilbert space and unitary operators]

with the natural isomorphism being the map I described.

>>All this stuff is in "An Introduction to Algebraic and Constructive
>>Quantum Field Theory" by Segal, Zhou and myself. I regret to say that
>>Segal did not let us use the term "promeasure" or "cylinder measure" -
>>he insisted on the term "distribution". This is terrible, because
>>"distribution" means at least 3 other important things in mathematics.

Namely:

1) continuous linear functionals on the space of smooth functions
2) vector subbundles of the tangent bundle
3) measures on the real line - aka "probability distributions"

All quite different! The last thing we need is a 4th meaning.

>The distributions I'm most familiar with, the generalised functions
>like Dirac's delta, are themselves measures, at least.

I presume you're talking about definition 1) above. Well, there are
lots of "distributions" in this sense that aren't measures, like
the derivatives of Dirac's delta.

>Perhaps he should have said "cylinder distribution" to be precise?

Urk. No. He shoulda used something like "promeasure" - some term
that indicates "not quite a measure".

Segal used the term "distribution" as some kind of take-off on
definition 3), but I don't think it was a wise idea. (I can
say this now that he is no longer alive, without him biting
my head off.)

..............................................................

Btw, should I give away everything I know about that darned
"noncommutative unit disc" puzzle? I'm sick of having that
little post-it note in my brain saying "someday you gotta
finish talking about that puzzle". It's occupying some much-
needed neurons!

cartoaje

unread,
Apr 14, 2000, 3:00:00 AM
to

also sprach John Baez,

>These numbers form a group isomorphic to Z/n. When we take
>functions on Z/n and decompose them in terms of eigenvectors of U,

I don't understand how you decompose functions in terms of
eigenvectors of U. The way I see it, if we take n = 5 and the
image of the eigenvectors to be the 5th roots of 1, then we have
25 eigenvectors. How do we decompose a function on that?

>This stuff generalizes from Z/n to any finite abelian group and
>is called Pontrjagin duality. In physspeak, if we take any
>finite abelian group as our "position space", the same group
>will be our "momentum space".

Sounds neat! Any references?

Mihai
mcar...@mat.ulaval.ca



Toby Bartels

unread,
Apr 17, 2000, 3:00:00 AM
to
John Baez <ba...@galaxy.ucr.edu> wrote:

>Toby Bartels <to...@ugcs.caltech.edu> wrote:

>>So, by "natural", you *don't* mean that L^2 and F.C are functors
>>joined by an invertible 2functor (natural transformation)?

>First: Please don't use "2-functor" to mean "natural transformation",
>because lots of people already use "2-functor" to mean the kind of
>thing that goes from one 2-category to another.

Well, that's ugly. Why can't they just call it "functor"?
I figured "nfunctor" could be the term for an nmorphism
is the wcategory of wcategories (where "w" means omega).
Since the 2category of categories is a subcategory of this,
then this terminology can be used there as well.
OTOH, if I say "2morphism", I still have to specify the 2category!
(But I guess I wouldn't have to in this case,
since the use of "functors" had already done that.)

Re: "distribution"

>1) continuous linear functionals on the space of smooth functions
>2) vector subbundles of the tangent bundle
>3) measures on the real line - aka "probability distributions"
>All quite different! The last thing we need is a 4th meaning.

Like promeasures, (1) and (3) can integrate certain things.
That's a rather trivial statement, when you get down to it,
but the point is that the use of the term "distribution"
is an invitation to look at it from that POV.

>>The distributions I'm most familiar with, the generalised functions
>>like Dirac's delta, are themselves measures, at least.

>I presume you're talking about definition 1) above. Well, there are
>lots of "distributions" in this sense that aren't measures, like
>the derivatives of Dirac's delta.

Right, the functional may not be defined on a characteristic function,
since such a function is usually not smooth.

>>Perhaps he should have said "cylinder distribution" to be precise?

>Urk. No. He shoulda used something like "promeasure" or
>that indicates "not quite a measure".

I'm not sure why "pro" indicates "not quite a measure". ("pre" I'd understand.)
But if the problem with "distribution" is that it's overloaded,
we can save its benefit -- an invitation to use a certain POV --
by giving it an adjective.

>Segal used the term "distribution" as some kind of take-off on
>definition 3), but I don't think it was a wise idea. (I can
>say this now that he is no longer alive, without him biting
>my head off.)

So, he used it to recall that point of view, yes?

>Btw, should I give away everything I know about that darned
>"noncommutative unit disc" puzzle? I'm sick of having that
>little post-it note in my brain saying "someday you gotta
>finish talking about that puzzle". It's occupying some much-
>needed neurons!

I mean to get around to finishing my thoughts on it,
but I'm working on other things -- when I get any time at all.
I hate to say it, but perhaps you should get it over with.


-- Toby
to...@ugcs.caltech.edu


John Baez

unread,
Apr 18, 2000, 3:00:00 AM
to
In article <jbrhwaAL...@clef.demon.co.uk>,
Charles Francis <cha...@clef.demon.co.uk> wrote:
>One problem with commercial programming is that you only get paid for
>using a DFT, not for knowing how it works.

As opposed to certain academics who know how it works but not what
it's good for. The complementarity principle at work!

>You don't seem to mention any windows, so what happened to the
>Gibbs' phenomenon?

I'm sure it's lurking around here somewhere, but I was talking
about much simpler things.

>Oh, you do mention L^2. What does L^2 mean on a finite dimensional space?

I presume you mean "on a finite set", since I was talking about
L^2(Z/n), where Z/n is a finite set.

L^2(X) always means the same thing - the Hilbert space of square-
integrable functions on the measure space X. So the only thing
I need to tell you is how to make a finite set into a measure
space - or in other words, how to integrate functions defined on
the finite set X. There are lots of ways to do it, but the default
one is to just *sum the values* of the function in question. In
other words, take the sum of f(x) as x ranges over the finite set
X. That's the nice thing about modern measure theory - it
automatically subsumes the theory of sums. Anyway, this means
that people define L^2(Z/n) to be the Hilbert space of all
complex-valued functions on Z/n, with the inner product
<f,g> = sum_{x in Z/n} conj(f(x)) g(x)

John Baez

unread,
Apr 18, 2000, 3:00:00 AM
to
In article <26f169fc...@usw-ex0102-014.remarq.com>,
cartoaje <mcartoaj...@mat.ulaval.ca.invalid> wrote:

>also sprach John Baez,

>>These numbers form a group isomorphic to Z/n. When we take
>>functions on Z/n and decompose them in terms of eigenvectors of U [....]

>I don't understand how you decompose functions in terms of
>eigenvectors of U. The way I see it, if we take n = 5 and the
>image of the eigenvectors to be the 5th roots of 1, then we have
>25 eigenvectors. How do we decompose a function on that?

I'm not sure where that number 25 came from! The space of
complex functions on Z/5 is 5-dimensional. The unitary operator
U given by

(Uf)(x) = f(x+1)

has 5 eigenvalues - the 5th roots of unity. The corresponding
eigenvectors f_1, ..., f_5 form a basis of the functions on Z/5.
These functions f_i are a discretized version of the usual "momentum
eigenstates" we all know and love from quantum mechanics. In fact,
the whole apparatus of Fourier transforms generalizes painlessly
to this situation. One doesn't even need integrals - just finite
sums!
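Concretely, the decomposition is just the finite Fourier transform on 5 points (a Python sketch of my own, not from the post): expand f in the eigenbasis of U with one finite sum, and resum to recover f exactly.

```python
import cmath
import math

n = 5
omega = cmath.exp(2j * math.pi / n)
f = [3.0, -1.0, 4.0, 1.0, 5.0]   # an arbitrary function on Z/5

# Coefficient of f along the k-th eigenvector of U - a finite sum.
F = [sum(f[x] * omega ** (-k * x) for x in range(n)) for k in range(n)]

# Resum: f is a linear combination of the 5 momentum eigenstates.
g = [sum(F[k] * omega ** (k * x) for k in range(n)) / n for x in range(n)]

err = max(abs(f[x] - g[x]) for x in range(n))
print(err)  # essentially zero: the finite Fourier expansion is exact
```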

>>This stuff generalizes from Z/n to any finite abelian group and
>>is called Pontrjagin duality. In physspeak, if we take any
>>finite abelian group as our "position space", the same group
>>will be our "momentum space".

>Sounds neat! Any references?

Hmm. There are some nice books with "Pontrjagin" or "Pontryagin"
in the title, but I forget the details. You might try George
Mackey's innumerable books on harmonic analysis and quantum mechanics.


Charles Francis

unread,
Apr 19, 2000, 3:00:00 AM
to

In article <8dg2km$g0n$1...@pravda.ucr.edu>, thus spake John Baez
<ba...@galaxy.ucr.edu>

>In article <jbrhwaAL...@clef.demon.co.uk>,
>Charles Francis <cha...@clef.demon.co.uk> wrote:
>>One problem with commercial programming is that you only get paid for
>>using a DFT, not for knowing how it works.
>
>As opposed to certain academics who know how it works but not what
>it's good for. The complementarity principle at work!
>
>>You don't seem to mention any windows, so what happened to the
>>Gibbs' phenomenon?
>
>I'm sure it's lurking around here somewhere, but I was talking
>about much simpler things.
>
>>Oh, you do mention L^2.

[snip, definition, thanks]

That's what I might have expected, but it doesn't answer the question
about the Gibbs phenomenon. It may not be simple to think about how it
happens, but on the face of it, it refutes the claim that we can switch
between position and momentum space without a hiccough, or alternatively
it shows that the analogy between your operator and the Fourier
transform breaks down.

Perhaps it is the latter. Maybe the Gibbs phenomenon is not lurking
here at all, but perhaps something else goes wrong instead - otherwise
this bit of theory would be far too important not to take up a large
chunk of books on signal processing. But as I understand it, however
flashy and clever the butterfly algorithms in it, the DFT is still a
kludgy discretisation (to use your phrase) of the FT. What you have
suggested sounds like an elegant alternative. But we don't seem to use
it. Why not? Is it just that this bit of maths never filtered through to
computer programmers? (it happens)

It seems pretty obvious that we can switch to and fro between position
and momentum space without a hitch, as we just have finite linear
combinations.
corresponds to the ordinary FT. You have generated an operator from a
discrete translation, and I believe the Fourier transform can be
generated in a similar way from the corresponding continuous
translation. So it appears that the two must be close. That suggests to
me that we would have a similar calculation, but a slightly different
set of coefficients. Chances are the coefficients will be trigonometric,
but it shouldn't really matter if they are not, as they would be
calculated in advance and placed in a look-up table. So what does go
wrong? Why don't we use this instead of the DFT?

Oh, and you did not comment on the lack of symmetry between position and
momentum space in the way I do things, or on the explanation that momentum
states are eigenstates in the embedding space. Is anything starting to
make sense?

John Baez

unread,
Apr 20, 2000, 3:00:00 AM
to
In article <8d8c9r$k56$1...@pravda.ucr.edu>,
John Baez <ba...@galaxy.ucr.edu> wrote:

>In article <26f169fc...@usw-ex0102-014.remarq.com>,
>cartoaje <mcartoaj...@mat.ulaval.ca.invalid> wrote:

>>Sounds neat! Any references?

Now that I'm in my office I can dig them up more easily....

>Hmm. There are some nice books with "Pontrjagin" or "Pontryagin"
>in the title, but I forget the details.

Maybe this is the one I was thinking about - I don't know:

Classifications of Abelian groups and Pontrjagin duality / Peter Loth.
Amsterdam: Gordon and Breach, c1998.

>You might try George
>Mackey's innumerable books on harmonic analysis and quantum mechanics.

Like these:

George Mackey, Quantum Mechanics from the Point of View of the
Theory of Group Representations, Mathematical Sciences Research
Institute, 1984.

George Mackey, Unitary Group Representations in Physics, Probability,
and Number Theory, Addison-Wesley, 1989.


John Baez

unread,
Apr 25, 2000, 3:00:00 AM
to
In article <8dl96l$rdo$1...@Urvile.MSUS.EDU>,
Charles Francis <cha...@clef.demon.co.uk> wrote:

>That's what I might have expected, but it doesn't answer the question
>about the Gibbs phenomenon.

Now that I think about it, I bet I see what happens with the
Fourier transform I was discussing:

Take a simple step function on Z/n, like this for example:

f(x) = 0 x = 1, 2, ..., n/2
= 1 x = n/2+1, ..., n

(Suppose n is even to keep life simple.) If you take its
Fourier transform and then take the inverse Fourier transform
you get back exactly the function you started with - none of
those nasty Gibbs ripples near x = n/2.
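This is easy to check numerically (here using numpy's FFT as the DFT on
Z/n; the particular n is arbitrary):

```python
import numpy as np

n = 16  # any even n will do
x = np.arange(1, n + 1)
f = np.where(x <= n // 2, 0.0, 1.0)  # the step function on Z/n

# The DFT followed by the inverse DFT recovers f exactly
# (up to floating-point rounding):
g = np.fft.ifft(np.fft.fft(f)).real
print(np.max(np.abs(g - f)))  # ~1e-16: no Gibbs ripples
```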

However, if you take the Fourier transform, then introduce
a little bit of "noise" by adding a slowly varying function to
the Fourier transform, and then Fourier transform back, I
bet you sometimes get those Gibbs ripples. Why? The inverse
Fourier transform of a slowly varying function is a sharply
peaked function. I don't see why the peak should be centered
near x = n/2.....

... but I bet that some reasonable type of noise will cause
the peak to be located around there. And maybe this "reasonable
type of noise" can easily occur due to rounding errors in a
numerical calculation.

But I'm just guessing about that last paragraph.
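The first part of the guess - that the inverse transform of a slowly
varying function is sharply peaked - is easy to illustrate numerically.
For a single slow cosine on the frequency side, all the energy of the
inverse DFT lands in two spikes near x = 0, not near x = n/2 (parameters
here are arbitrary):

```python
import numpy as np

n = 256
k = np.arange(n)
# A slowly varying perturbation on the frequency side:
noise = 0.01 * np.cos(2 * np.pi * k / n)

# Its inverse DFT is sharply peaked: essentially all of it sits
# in the two spikes at positions 1 and n-1.
spike = np.fft.ifft(noise)
big = np.argsort(-np.abs(spike))[:2]
print(sorted(int(i) for i in big))  # [1, 255]
```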

>Perhaps it is the latter. Maybe the Gibbs phenomenon is not lurking
>here at all, but perhaps something else goes wrong instead - otherwise
>this bit of theory would be far too important not to take up a large
>chunk of books on signal processing. But as I understand it, however
>flashy and clever the butterfly algorithms in it, the DFT is still a
>kludgy discretisation (to use your phrase) of the FT. What you have
>suggested sounds like an elegant alternative. But we don't seem to use
>it. Why not? Is it just that this bit of maths never filtered through to
>computer programmers? (it happens)

I don't know!

>It seems pretty obvious that we can switch back and forth between position and
>momentum space without a hitch, as we just have finite linear
>combinations. So it must be worth knowing a bit more about how this
>corresponds to the ordinary FT. You have generated an operator from a
>discrete translation, and I believe the Fourier transform can be
>generated in a similar way from the corresponding continuous
>translation. So it appears that the two must be close.

The theory of the ordinary Fourier transform is very, very similar
to the theory of the discrete Fourier transform I mentioned. In fact,
mathematicians regard them as two special cases of Pontrjagin duality.
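To make the connection concrete: the DFT on Z/n is built from the
characters of Z/n, the maps k -> exp(-2 pi i j k / n), just as the
ordinary Fourier transform is built from the characters exp(ipx) of the
real line. A quick check that the character matrix reproduces numpy's
FFT (sign and normalization conventions as in numpy):

```python
import numpy as np

n = 8
jj, kk = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
# Row jj of this matrix is the character k -> exp(-2*pi*i*jj*k/n):
chars = np.exp(-2j * np.pi * jj * kk / n)

f = np.random.rand(n)
print(np.allclose(chars @ f, np.fft.fft(f)))  # True
```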

>Oh, and you did not comment on the lack of symmetry between position and
>momentum space the way I do things, or on the explanation that momentum
>states are eigenstates in the embedding space. Is anything starting to
>make sense?

Not really.

Ralph Hartley

Apr 27, 2000

John Baez wrote:

> Take a simple step function on Z/n, like this for example:
>
> f(x) = 0 x = 1, 2, ..., n/2
> = 1 x = n/2+1, ..., n
>
> (Suppose n is even to keep life simple.) If you take its
> Fourier transform and then take the inverse Fourier transform
> you get back exactly the function you started with - none of
> those nasty Gibbs ripples near x = n/2.
>
> However, if you take the Fourier transform, then introduce
> a little bit of "noise" by adding a slowly varying function to
> the Fourier transform, and then Fourier transform back, I
> bet you sometimes get those Gibbs ripples. Why? The inverse
> Fourier transform of a slowly varying function is a sharply
> peaked function. I don't see why the peak should be centered
> near x = n/2.....

[...]

This is not what the Gibbs phenomenon is about. Here's the way I remember it (from a past life).

If you just work on the continuous line R, or just on the discrete Z/n, everything works fine: the Fourier transform (or DFT) and its inverse are exact. The trouble starts when you use the discrete transform as an approximation for a continuous one.

You might think that by using a finer grid you could reduce the maximum error to an arbitrarily small value, but this is not the case. The Fourier series converges pointwise, but not uniformly. The problem is that the step function has power at all frequencies.
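This is easy to see numerically: however many terms of the Fourier
series of a square wave you keep, the overshoot next to the jump stays
at roughly 9% of the jump height - the partial-sum maximum tends to
(2/pi) Si(pi) ~ 1.179, not 1. A sketch, with arbitrary grid parameters:

```python
import numpy as np

def partial_sum(t, n_terms):
    # Fourier series of the square wave sign(sin t):
    # (4/pi) * sum over odd m of sin(m*t)/m, truncated to n_terms terms.
    m = np.arange(1, 2 * n_terms, 2)
    return (4 / np.pi) * np.sum(np.sin(np.outer(t, m)) / m, axis=1)

t = np.linspace(1e-4, 0.5, 5000)  # region just right of the jump at 0
for n_terms in (50, 200, 1000):
    print(n_terms, partial_sum(t, n_terms).max())
# Each maximum is ~1.179, about 9% above the wave's height of 1,
# no matter how many terms are kept.
```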

Ralph Hartley


[Moderator's note: Quoted text trimmed. -MM]
