I'm doing postgraduate studies in philosophy, and one of my current
interests is in looking at the status of intrinsic properties. More
specifically, I am interested in (a) whether there are any intrinsic
properties, and (b) if so, which properties are in fact intrinsic. I was
hoping that some of you could give me some help here.
Philosophers are still debating what exactly it is to be an intrinsic
property, but the basic idea is that an intrinsic property is a property
that something has "in and of itself". Thus, for instance, being taller
than Jack would not count as an intrinsic property. Nor would properties
like being a certain distance from another object, and so on.
Part of what got me thinking about the status of intrinsic properties was
the implications that special relativity seems to have for various
properties that philosophers have often thought of as being
paradigmatically intrinsic. These are properties like length, shape, and
mass. But it seems that according to relativity theory, nothing has these
properties in and of themselves. I have a certain shape relative to some
inertial reference frames, but not others. Likewise for mass and length.
I was thinking that, with respect to shape at least, there might be
something intrinsic that can be salvaged. It seems that there is
something that remains invariant across all the reference frames - some
sort of topological or structural features (I am groping for the right
words here, as maths and geometry aren't my strong points). To illustrate
this, if you compress or stretch something uniformly in a certain
direction, it seems that although its shape has changed, there is
something about its topology which remains the same as it was before -
there is a transformation that connects its shape after stretching or
contraction and its shape before. So maybe this frame-invariant
topological feature counts as intrinsic to the object.
Another thing I am curious about is whether light-paths themselves count
as genuine inertial reference frames. If that were so, then it seems that
relative to some light that is travelling towards me, my length would be
zero - I would be (spatially) a two-dimensional object. But then, I was
wondering, if I was squashed down to two dimensions in a certain reference
frame, would all of the topological features that I was speaking of in the
last paragraph still be preserved? It seems that they might not be, and
that therefore, if light-paths count as genuine reference frames, then
there might not be any frame-invariant topological features of an object
either. So in that case, these topological features would not count as
intrinsic either.
I am curious to get some input on these thoughts.
I am also wondering if there are any other areas of physics which have any
implications for the questions of whether there are intrinsic properties
or not, and if so, what properties count as intrinsic.
Thanks, your help is appreciated.
Neil.
>I was thinking that, with respect to shape at least, there might be
>something intrinsic that can be salvaged. It seems that there is
>something that remains invariant across all the reference frames - some
>sort of topological or structural features (I am groping for the right
>words here, as maths and geometry aren't my strong points).
It sounds as though you are leaning towards defining an "intrinsic
property" to be an *invariant* --- a quantity that transforms
trivially under the symmetry group of the physical theory in question.
Take for example Euclidean geometry, where the symmetry group consists
of translations and rotations in space (and if you feel like it,
reflections). Then the position of a point is not an invariant, since
it changes under translations. However, the distance AB between two
points A and B is an invariant, as is the angle ABC formed by points
A, B, and C. These do not change when we translate or rotate the
figure in question.
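Here's a little numerical illustration of that in Python (just a sketch of
my own, nothing deep): apply a rotation and a translation to three points
and watch the distance and the angle stay put.

  import numpy as np

  def angle(A, B, C):
      # angle ABC at the vertex B, in radians
      u, v = A - B, C - B
      return np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

  A = np.array([0.0, 0.0])
  B = np.array([1.0, 0.0])
  C = np.array([1.0, 2.0])

  theta = 0.7                                  # an arbitrary rotation angle
  R = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
  t = np.array([3.0, -1.0])                    # an arbitrary translation

  A2, B2, C2 = R @ A + t, R @ B + t, R @ C + t

  print(np.linalg.norm(A - B), np.linalg.norm(A2 - B2))   # same distance
  print(angle(A, B, C), angle(A2, B2, C2))                 # same angle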
The group of symmetries of Euclidean geometry is called the Euclidean
group. In special relativity the symmetry group is called the
Poincare group. This consists of translations and rotations in space,
translations in time, Lorentz transformations, and combinations
thereof. We could use the mathematics of the Poincare group to work
out what properties of a shape (or more precisely, a region in
spacetime!) are invariant under all such transformations. Certainly
there are a lot of invariant properties. There is a large body of
math designed for studying such questions. Perhaps it's worth
recalling the history (in a very sketchy outline)....
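Before the history, here's the same sort of sketch for the most basic
Poincare invariant, the spacetime interval (again my own toy numbers,
working in units with c = 1): the coordinates (t, x) of an event change
under a boost, but t^2 - x^2 does not.

  import numpy as np

  def boost(v):
      # Lorentz boost along x with velocity v (in units of c)
      g = 1.0 / np.sqrt(1.0 - v**2)
      return np.array([[g, -g*v],
                       [-g*v, g]])

  event = np.array([2.0, 1.5])       # (t, x) of an event in one frame
  for v in (0.0, 0.3, 0.9):
      t, x = boost(v) @ event
      print(v, t**2 - x**2)          # the same number in every frame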
The notion of an "intrinsic" property is, as you note, easy to argue
about, while the notion of an invariant is quite precise and suitable
for mathematics and physics. Ever since the late 1800s there has been
a branch of mathematics called "invariant theory" devoted to the study
of invariants. Also around that time the notion of "group" was
developed for the mathematical study of symmetry, and a bit later
Felix Klein in his "Erlangen program" proposed to study the various
kinds of geometry and the corresponding kinds of invariants by means
of group theory. Throughout the 20th century group theory has been at
the very heart of theoretical physics, and computing invariants
is the livelihood of many a mathematician. There should be a large
literature on the philosophical aspects of group theory lurking out
there somewhere... all that comes to mind immediately is the following
book, written by a philosophically inclined mathematical physicist:
Hermann Weyl, _Symmetry_
Uhnh? Is this what groups encapsulate, invariants? So relative angles
and distances are invariant in Euclidean geometry, OK, OK. You will
perhaps forgive me for considering that it's a rather complicated way of
saying it although I guess you also get the operations that maintain the
invariance. OTOH there will be another group presumably (one that
includes scaling for example) that still maintains the invariance of
relative angles, but not of distance.
At first glance it's hard to see the utility of this but given the
effort that has gone into this way of looking at things, I guess it must
have several.
Are we talking about a group of *operations* here, ie reflection,
translation etc? This would make more sense. If so one is somewhat
surprised that such a collection actually forms a group, given the
restrictions on defining a group.
You will easily deduce that I don't have a clue about this.
>The group of symmetries of Euclidean geometry is called the Euclidean
>group. In special relativity the symmetry group is called the
>Poincare group. This consists of translations and rotations in space,
>translations in time, Lorentz transformations, and combinations
>thereof. We could use the mathematics of the Poincare group to work
>out what properties of a shape (or more precisely, a region in
>spacetime!) are invariant under all such transformations. Certainly
>there are a lot of invariant properties. There is a large body of
>math designed for studying such questions. Perhaps it's worth
>recalling the history (in a very sketchy outline)....
Then there appears to be an electromagnetic group. Is it U(1) (whatever
that is)? Is it possible to gain a superficial understanding by looking
at what a couple of simple groups do? Hmmmm, don't we have some
invariants coming from Laplace also, is there a connection?
> There should be a large
>literature on the philosophical aspects of group theory lurking out
>there somewhere... all that comes to mind immediately is the following
>book written by philosophically inclined mathematical physicist:
Hmmmm. Why do I get the feeling that I wouldn't be able to follow said
book even if I had it. However one has the feeling that one could
appreciate the concept without full understanding of the maths with a
little help.
--
'Oz "Is it better to seem ignorant and learn,
- or seem wise and stay ignorant?"
>Uhnh? Is this what groups encapsulate, invariants?
Very loosely speaking, a group is a bunch of "operations" or
"transformations", which something may or may not be invariant under.
So the notion of group is a prerequisite for the mathematical
study of invariants. However, one also uses groups to study *how things
vary*. You may dimly recall that in your general relativity class we were
constantly talking, not only about "invariant" quantities, but also
about "covariant" and "contravariant" tensors and the like.
To really understand all this "variance" stuff, you need group theory.
Since most of physics is about how the physical world varies as our viewpoint
changes (e.g. as time passes), group theory lies at the very heart of
physics.
>So relative angles
>and distances are invariant in Euclidean geometry, OK, OK. You will
>perhaps forgive me for considering that it's a rather complicated way of
>saying it although I guess you also get the operations that maintain the
>invariance.
You seem to have passed directly from thinking "groups are too profound
for me to ever understand" to thinking "what? Is this all groups are?
What a complicated way of talking about something so simple!" Good!
The first step in learning is to overcome your fear of the subject.
Now there's just one detail left to explain: what a group is.
I'll tell you the definition and you can ponder it and ask me
questions until it makes sense.
A group is a set G together with a binary operation --- let's call it
+ for now --- with a certain list of properties. First I'd better
make sure you know what a "binary operation" is, since I know mathematics
jargon is not your forte. All I mean by this is that if we have two
elements of our group G, say g and h, then we can get a new one called g+h.
Okay? So, here are the properties this binary operation must have:
1) Associativity: (g+h)+k = g+(h+k) for all g,h, and k in G.
2) Existence of an identity element: there exists an element of G
called the "identity" and denoted by 0, such that g+0 = g = 0+g.
3) Existence of inverses: for every g in G there exists an element
of G called its "inverse" and denoted by -g, such that (-g)+g =
g+(-g) = 0.
That's all there is to it. If you want to talk about this more, please
save this definition somewhere, since I am never going to type it in
again --- typing in definitions is incredibly dull.
Note that sometimes we call the binary operation x ("multiplication")
instead of + ("addition"), in which we call the identity 1 and the
inverse g^{-1}. The reason for this appalling flexibility of notation
is that the notion of group is appallingly general and covers both
"additive" and "multiplicative" situations.
Check to see if the following are, or are not, examples of groups:
1) The set R (the real numbers) equipped with the binary
operation + (ordinary addition).
2) The set R equipped with the binary operation x (ordinary
multiplication).
3) The set R-{0} (the real numbers excluding zero) equipped with
the binary operation x.
4) The set of translations in the plane, with the binary operation
being "composition": we compose translations g and h by first doing
the translation g and then the translation h, the result being
another translation.
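If it helps, here is a rough numerical spot-check of these examples in
Python (my own sketch; it only samples a few elements and tests
associativity, identity, and inverses, so passing it is evidence, not proof).

  import random

  def looks_like_group(elements, op, identity, inverse, tol=1e-9):
      for g in elements:
          for h in elements:
              for k in elements:
                  if abs(op(op(g, h), k) - op(g, op(h, k))) > tol:
                      return False
          if abs(op(g, identity) - g) > tol or abs(op(identity, g) - g) > tol:
              return False
          inv = inverse(g)
          if inv is None or abs(op(inv, g) - identity) > tol:
              return False          # no inverse, or the wrong inverse
      return True

  sample = [random.uniform(-5.0, 5.0) for _ in range(5)]

  # 1) (R, +): identity 0, inverse -g                      -> a group
  print(looks_like_group(sample, lambda a, b: a + b, 0.0, lambda g: -g))
  # 2) (R, x): the element 0 has no inverse                -> not a group
  print(looks_like_group(sample + [0.0], lambda a, b: a * b, 1.0,
                         lambda g: 1.0 / g if g != 0 else None))
  # 3) (R - {0}, x): identity 1, inverse 1/g               -> a group
  print(looks_like_group(sample, lambda a, b: a * b, 1.0, lambda g: 1.0 / g))
  # 4) translations of the plane compose like vector addition,
  #    so they work out just like example 1.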
>Are we talking about a group of *operations* here, ie reflection,
>translation etc? This would make more sense.
Yes indeed. We can think of a group G as a bunch of "operations"
and the + as "composition" of operations: the process of doing first
one, then another.
>If so one is somewhat
>surprised that such a collection actually forms a group, given the
>restrictions on defining a group.
Groups are very common; you have only to learn what they are and you
will see them everywhere.
Check using the definition of group above that the collection of
all transformations consisting of a translation of the plane followed
by a rotation form a group.
>You will easily deduce that I don't have a clue about this.
Thanks for saving me the trouble of making some remark along these
lines.
[Note to the audience: flaming is forbidden on s.p.r. and as a
moderator I should set a good example. However, in a previous
tutorial on general relativity, I adopted the persona of a wizard
who was always mocking the bumbling apprentice played by Oz (whom
in real life I treat with exquisite politeness). If I wind up
explaining group theory to Oz, I may at times revert to my old ways.
Hopefully this will be acceptable, even to Oz.]
>Then there appears to be an electromagnetic group. Is it U(1) (whatever
>that is)?
Yes, U(1) rules electromagnetism. A while back I nicely typed
up a list of definitions of a bunch of basic groups and emailed
it to you. The definition of U(1) is in there (actually U(n),
but just take n = 1 and see what it boils down to).
>Is it possible to gain a superficial understanding by looking
>at what a couple of simple groups do?
Yes indeed. It might be good to start with groups consisting
of translations, rotations, reflections, Lorentz boosts and
various combinations thereof.... like the Euclidean group and
Poincare group. (The latter governs special relativity.)
>Hmmmm. Why do I get the feeling that I wouldn't be able to follow said
>book even if I had it.
Perhaps because you haven't actually looked at it. It was written
for people who want to get a taste of symmetry without actually
eating the main course. The above post is infinitely harder to digest.
>Hmmmm. Why do I get the feeling that I wouldn't be able to follow said
>book even if I had it. However one has the feeling that one could
>appreciate the concept without full understanding of the maths with a
>little help.
the book in question being Weyl's _Symmetry_.
Not so! Weyl was not only one of the great mathematicians of the
twentieth century, but a man steeped in culture, and a marvelous prose
stylist to boot. _Symmetry_ roams across art, math, science, and
philosophy with aplomb. And it's a light read for all that! IMHO,
one has only to compare _Symmetry_ with Hawking's _Brief History_ or
Penrose's _Emperor_ to see how far science popularization has sunk in
the past few decades.
Incidentally, a generous helping of _Symmetry_ is reproduced in _The
World of Mathematics_, ed. J.R.Newman.
Similarly, events in your brain are different in different coordinate
systems. This implies one of two things. Either consciousness is not what
it naively appears to be (ie, with a well-defined state of consciousness at
each instant of subjective time); or consciousness is not determined by
physical reality.
This was a stumbling point for me when I learned about special relativity.
I still think it is interesting (which is why I'm reposting this).
Greg
> Similarly, events in your brain are different in different coordinate
> systems. This implies one of two things. Either consciousness is not what
> it naively appears to be (ie, with a well-defined state of consciousness at
> each instant of subjective time); or consciousness is not determined by
> physical reality.
In a sense, this dichotomy appears long before special relativity is
invoked, since the speed of communication between different parts of the
nervous system is so slow-- much slower than the speed of light. I'd say
that most people wouldn't be terribly troubled by the idea that
consciousness takes a nanosecond or so to fully experience an event
(which would be the interval imposed by special relativity), but the
relevant time intervals are actually much longer-- large fractions of a
second.
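A back-of-the-envelope version of that comparison (my numbers are only
order-of-magnitude guesses):

  head_size   = 0.15     # metres, roughly
  c           = 3.0e8    # m/s, speed of light
  nerve_speed = 50.0     # m/s, a typical order of magnitude for nerve signals

  print(head_size / c)            # ~5e-10 s: the delay relativity imposes
  print(head_size / nerve_speed)  # ~3e-3 s: the delay the neurons impose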
I've heard of neurological tests that bring this into stark relief. I.e.
people report conscious, planned reactions to a stimulus when
examination of their neural activity seems to indicate a rationalization
after the fact of a more reflexive response. One example of this is
familiar. When the doctor hits your knee with a rubber mallet to test
the patellar reflex, your kick *feels* like an intentional response
integrated with all of your other thoughts (at least, it does to me--
when I was a kid I used to be afraid that I was screwing up the doctor's
test by *willing* my leg to kick out), even though (unless I
misremember) the decision-making actually happened somewhere in your
spinal column before your brain heard about it.
Either the "monad" of subjective experience is something nonlocal,
requiring no physical communication to hold it together, or we perform a
kind of post-hoc revision of our memories to make it appear as if we are
single reasoning entities (a view particularly espoused by Daniel
Dennett). Or maybe it's some combination of the two...
--
Font-o-Meter! Proportional Monospaced
^
Physics, humor, Stanislaw Lem reviews: http://world.std.com/~mmcirvin/
:Similarly, events in your brain are different in different coordinate
:systems.
Might you elaborate?
:This implies one of two things. Either consciousness is not what
:it naively appears to be (ie, with a well-defined state of consciousness at
:each instant of subjective time);
It seems that way? Not IMHO; the brain appears to operate on many levels
of consciousness at once. (Perhaps this branch of the thread can move
to sci.physics, where mixing in neurology and psychology (one of my
relatives having once been prominent there) would be more appropriate.)
:or consciousness is not determined by physical reality.
And I'm a vitalist.
[Moderator's note: I have taken the author's suggestion and
set followups to sci.physics. - jb]
Well, yes and no (depending on what you mean by "events" and "different").
Observers in different coordinate systems will definitely assign different
values to the mass of your brain and the lengths of your neurons, but
should agree about things like "this neuron fired after (in the forward
light cone of) this other neuron". In particular, if you are a
functionalist like me (i.e., you believe that what is important about the
brain is the function it computes), you may take refuge in the fact that
observers in different coordinate systems observing the same computer
will agree about the program it is running and the results of any
particular computation.
In short, I agree that consciousness is probably not what it naively
appears to be (does anyone?), but disagree that special relativity
provides any evidence that it is not determined by physical reality.
-Thomas C
----------------------------------------------------
<a href="http://www.mit.edu:8001/people/thomasc/">
Somerville Stories, RICHH Archive, TwentyNothing</a>
>Either the "monad" of subjective experience is something nonlocal,
>requiring no physical communication to hold it together, or we perform a
>kind of post-hoc revision of our memories to make it appear as if we are
>single reasoning entities (a view particularly espoused by Daniel
>Dennett). Or maybe it's some combination of the two...
There should be a nice branch of physics dealing with complex
organized phenomena which are sufficiently spatially extended that the
time for information to pass all the way across them exceeds the
timescale at which interesting events occur.
Clearly information needs to pass all the way across many times before
correlations can arise, unless there are correlations to begin with.
Famous example #1: the "horizon problem" in cosmology - in the standard
big bang model, galaxies sufficiently far from one another have not
had time to be in causal contact, yet the microwave background radiation
is quite homogeneous even on these length scales. Cf:
http://astro.berkeley.edu/~jbaker/IDS160/Solutions/hw8/node6.html
However, once correlations are there, funny things can happen:
different parts of the system can act in concert on time scales too
short for information to get from one to the other - simply because
they are already set up to do so. One can even get apparent
"motion" going faster than the speed at which information can propagate.
Famous example #2: the scissors paradox in special relativity, where
the point of contact of two closing blades of a huge scissors can
race forwards faster than the speed of light. Of course this is not
a paradox if the scissor blades are set into motion all along their length
by a prearranged plan. Cf:
http://math.ucr.edu/home/baez/physics/scissors.html
Famous example #3: the Belousov-Zhabotinsky reaction, in which an
initially homogeneous solution of chemicals begins to exhibit
traveling waves of yellow, moving faster than the chemicals actually
diffuse. It is especially fun to see the cellular automaton
simulations of this reaction, in which the traveling waves clearly
move faster than the rate of information propagation.
Interestingly, Belousov's first paper on this reaction was rejected,
the editor writing that his "supposedly discovered discovery" was impossible.
He worked 6 more years to improve the paper, but then the editor insisted
that it be shortened to a letter. He decided to give up on publishing it
and simply circulated it among colleagues; it was published posthumously.
10 years after his death he was awarded the Lenin Prize for his discovery,
which triggered an immense amount of work on spontaneously self-organizing
systems. Cf:
http://puddle.st.usm.edu/NEW/pages/pchem/firstsem/exp2/intro_1.2.html
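For anyone who wants to play along, here is a toy one-dimensional
"excitable medium" cellular automaton in Python (a throwaway sketch of my
own, not a faithful model of the BZ chemistry): 0 is resting, 1 is excited,
2-4 are refractory, and a resting cell becomes excited when a neighbour is
excited, so a wave of excitation marches down the line.

  R = 4                      # last refractory state
  N = 40                     # number of cells
  cells = [0] * N
  cells[0] = 1               # seed an excitation at one end

  for step in range(30):
      print(''.join('*' if c == 1 else ('.' if c == 0 else '-') for c in cells))
      nxt = []
      for i, c in enumerate(cells):
          if c == 0:                              # resting
              left  = cells[i - 1] if i > 0 else 0
              right = cells[i + 1] if i < N - 1 else 0
              nxt.append(1 if 1 in (left, right) else 0)
          elif c < R:                             # excited/refractory: age by one
              nxt.append(c + 1)
          else:                                   # fully recovered
              nxt.append(0)
      cells = nxt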
Even in situations without "seemingly superluminal propagation",
there are interesting puzzles about how extended systems form and
maintain their structure.
Famous example 4: What causes spiral arms of galaxies? One theory,
the "density wave" theory, suggests they form naturally due to
gravitational instabilities in the galactic disk. Another, the
"propagating star formation" model, suggests that formation of big O
and B type stars gives rise to supernovae whose shock waves compress
clouds of cold gas and dust, triggering more star formation. In this
scenario the motion of the spiral arms does not correspond to the
motion of individual stars, but simply waves of star formation. Some
people have simulated this scenario using cellular automata. Cf:
http://dept.physics.upenn.edu/~myers/ASTR001/L35.html
I would not be surprised if similar phenomena occur in our brains, but
I don't know enough neurophysiology to know.
This reminds me of a pet niggle - it is often said that the time scale
of the radiation from a cosmological source puts an upper bound on its
size. I don't see that this is _necessarily_ the case, as JB's examples
testify. Here's a more extreme example where we get an impulse from
an arbitrarily large object.
(Perhaps not so) famous example #5: The firing squad problem. A line
of identical soldiers of arbitrary length n is to discharge their rifles
simultaneously. They all start in the same initial state, each is
allowed to communicate only with his two neighbours. The soldiers
have k states, with k fixed, finite, and independent of n. (I.e. they
cannot count, and n cannot be coded into their transition function.)
The sergeant gives the order to fire to a soldier at one end.
Solution left as an exercise to the reader.
There are many generalisations of this which have solutions, circular,
toroidal, etc.
Matt McIrvin <mmci...@world.std.com> wrote under the name "Matt McIvin":
>People report conscious, planned reactions to a stimulus when
>examination of their neural activity seems to indicate a rationalization
>after the fact of a more reflexive response. One example of this is
>familiar. When the doctor hits your knee with a rubber mallet to test
>the patellar reflex, your kick *feels* like an intentional response
>integrated with all of your other thoughts (at least, it does to me--
>when I was a kid I used to be afraid that I was screwing up the doctor's
>test by *willing* my leg to kick out), even though (unless I
>misremember) the decision-making actually happened somewhere in your
>spinal column before your brain heard about it.
I can confirm that the knee reflex is controlled by the spine, not the brain.
I always worry about willing my knee to move during this test,
and, while it usually feels like I never willed anything,
I try to relax before the mallet hits me
to make sure that the response is pure reflex.
I guess I'm being unnecessarily careful.
I always wondered what perverse mechanism in my brain
would make me will my knee to move during the reflex test.
Now I know it's just rationalization, one of the perversest mechanisms of all.
-- Toby
to...@ugcs.caltech.edu
[...]
>Famous example 4: What causes spiral arms of galaxies? One theory,
>the "density wave" theory, suggests they form naturally due to
>gravitational instabilities in the galactic disk. Another, the
>"propagating star formation" model, suggests that formation of big O
>and B type stars gives rise to supernovae whose shock waves compress
>clouds of cold gas and dust, triggering more star formation. In this
>scenario the motion of the spiral arms does not correspond to the
>motion of individual stars, but simply waves of star formation.
Actually, the same is true of the density-wave scenario. The
density wave, just like a water wave, propagates at a different
speed than its constituent particles.
As cool as spiral arms are, I have to admit I don't see how this
example fits into the general scheme you're talking about. What
phenomenon in a spiral galaxy is supposed to be occurring on a faster
time scale than the information transfer time? I don't see one.
>This reminds me of a pet niggle - it is often said that the time scale
>of the radiation from a cosmological sources puts an upper bound on its
>size. I don't see that this is _necessarily_ the case, as JB's examples
>testify.
This reminds me:
Once people were worried about "superluminal jets" of hot gas which
appear to be emitted by quasars. Now, ironically, the standard explanation
involves special relativity: if an object is moving towards you as well as
across the sky, it can *seem* to be moving faster than light if you neglect
its motion towards you, thanks to the finite speed of light:
http://math.ucr.edu/home/baez/physics/superluminal.html
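A quick numerical version of that explanation (my own sketch, using the
usual textbook formula rather than anything from that page): a blob moving
at speed beta (in units of c) at angle theta to the line of sight has
apparent transverse speed beta sin(theta) / (1 - beta cos(theta)), which
easily exceeds 1.

  import numpy as np

  def apparent_speed(beta, theta):
      # apparent transverse speed across the sky, in units of c
      return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

  print(apparent_speed(0.95, np.radians(20)))   # about 3: "superluminal"
  print(apparent_speed(0.50, np.radians(60)))   # about 0.58: nothing odd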
But I believe there was a competing theory at one point, in which
the trick I mentioned was used to weasel out of the problem.
Can you give away the answer to your firing squad puzzle if nobody
answers it after a while? I don't see how to do it.
>The
>density wave, just like a water wave, propagates at a different
>speed than its constituent particles.
Aha.
The website I cited explained the density wave theory of galactic
spiral arms by analogizing them with the density waves occurring
in traffic jams. In both, the idea is that a kind of instability
causes small inhomogeneities to amplify up to a certain size.
(In the case of the density wave theory, I guess this instability
is due to the attractive force of gravity, while in the case of
traffic jams it's due to how thicker traffic moves more slowly.)
It wasn't until after I posted that I realized: since the
density waves in traffic move at a completely different speed
than individual cars, the same might be true in the density
wave theory of spiral arms.
But I guess in both the wave velocity is *slower* than that of
the constituent particles?
>As cool as spiral arms are, I have to admit I don't see how this
>example fits into the general scheme you're talking about. What
>phenomenon in a spiral galaxy is supposed to be occurring on a faster
>time scale than the information transfer time? I don't see one.
Hmm, you're right. Shucks. I guess I was getting carried
away by my hopes for some cool analogies. I hoped that just like
the waves in the Belousov-Zhabotinsky reaction, the galactic waves
might move faster than the information actually propagated...
What's the largest length scale at which one sees waves whose
phase velocity exceeds the information propagation velocity?
Maybe if we form a galactic civilization, certain trends will seem to
spread faster than light, thanks to previously existing correlations
in culture which lead to near-simultaneous developments on different
planets.
One of the problems is that, bought new, these books have to be ordered
(from the UK) at huge cost. The Feynman lectures are like this
(out of print and $100s). Doubtless there are convenient low-cost
second hand book sales at US universities, however these are out of
reach for me. I will, however, attempt to locate a copy.
Anyone near Oxford (UK) with a cheap copy for sale? Loan?
The Feynman lectures can be bought off the shelf for £27/vol
in bookshops like Blackwells which has its main branch in
Oxford. Weyl's "Symmetry" should also be available but you never
know for sure till you order. There is a 1992 edition from
Princeton UP, priced at £8.50.
By the way, £ is a pound sign where I am writing it but
I don't know how it will come out elsewhere.
Phil Gibbs
http://www.weburbia.com/ http://www.weburbia.demon.co.uk/
"When all you've got is a hammer, everything looks like a nail"
- Japanese proverb
In article <HeZiBEAD...@upthorpe.demon.co.uk>,
Oz <O...@upthorpe.demon.co.uk> wrote:
>In a reply to article <5q5vco$o...@agate.berkeley.edu> from John Baez
><ba...@math.mit.edu>, Oz wrote:
>====
>JB said:
>U(n) - all n x n unitary complex matrices. U stands for
> "unitary"
>====
>I must be guessing the definition wrongly. To me a 1 x 1 matrix has one
>row and one column, unitary suggests 1 and complex suggests i to give me
>only four members [1], [-1], [i], [-i].
No, that's not U(1), that's the group Z/4 (pronounced "Z mod 4"). This
is like a little clock suitable for a world whose day has only four hours:
           i
    -1           1
          -i
where you count time by starting at 1 and marching around
counterclockwise... I guess the sun goes backwards on this planet.
You get back to where you started after four hours. Actually when
I was in grade school learning math, we did the "new math", as it
was called, and we learned all about "clock arithmetic", meaning
stuff like Z/4 or Z/12 or Z/2 or whatever. Secretly they were teaching
us group theory.
In any event, Z/4 is the group of rotational symmetries of the square:
there are four ways you can turn a square and have it look like it did
before you turned it: you can turn it by 0, 90, 180, or 270 degrees.
But U(1) is much bigger: it's the group of rotational symmetries of
the *circle*!
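Here is a tiny Python sketch (mine) showing the "clock arithmetic" table
for Z/4, and the same table coming out of the four rotations 1, i, -1, -i
of the square under multiplication.

  # addition mod 4: rows and columns are the elements 0, 1, 2, 3
  for g in range(4):
      print([(g + h) % 4 for h in range(4)])

  # the same group realized as 1, i, -1, -i under complex multiplication
  vals = [1, 1j, -1, -1j]
  for g in range(4):
      print([vals.index(vals[g] * vals[h]) for h in range(4)])  # same table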
>However it suddenly occurred to me in the shower a few minutes ago that
>'unitary' could mean an infinite group of magnitude 1 of form (a + bi)
>where a^2+b^2 =1.
This is indeed the correct answer. I guess the problem was I didn't tell
you what "unitary" means. The conceptual definition of a unitary matrix
is that it gives a linear transformation of C^n that preserves lengths
and angles. For n = 1 this is just a rotation!
However, this may be nothing but mumbo-jumbo to you, so let me give
you a less conceptual definition of "unitary". This less conceptual
definition will let you take a complex matrix T and check in a completely
mechanical way to see if it's unitary.
First I need to tell you what the "adjoint" of a matrix T is. To take
the adjoint of T, you first take the complex conjugate of each entry,
and then you flip it along the diagonal. So for example if I hand
you the matrix
1+2i 3
i 2-7i
and tell you to take its adjoint, first you take the complex conjugate
of each entry:
1-2i 3
-i 2+7i
and then you flip it along the diagonal as follows:
1-2i -i
3 2+7i
Okay? We denote the adjoint of the matrix T by T*. This adjoint
stuff is crucial in quantum mechanics; it may seem arbitrary and
puzzling but that's because I'm just giving you the cookbook recipe
instead of the conceptual explanation (which as always requires more
mathematical sophistication to really sink one's teeth into).
Now, we say that a complex matrix is unitary if TT* and T*T are
both the identity matrix. Actually, for n x n matrices when n is
finite (the only case we're worrying about here!), if TT* = 1 then
T*T = 1 as well, so we can just say:
T is unitary if TT* = 1.
Let me do an example. You may or may not be able to multiply
matrices in your sleep. Let's take T to be
0 i
-i 0
Then T* is
0 i
-i 0
(Huh, T* is just the same as T... so T is "self-adjoint" in this case!)
If we multiply TT* using the usual rules for matrix multiplication, we
get
1 0
0 1
which is the identity matrix 1. So: yup, this matrix T is unitary.
Since it's a 2x2 matrix we say it's a member of U(2). Kids all over
the world have heard of the group U(2). If you don't know what I mean
just ask your son.
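And if you get tired of multiplying matrices by hand, here is the same
check in Python with numpy (a sketch; T.conj().T is exactly the
"conjugate, then flip" recipe).

  import numpy as np

  T = np.array([[0, 1j],
                [-1j, 0]])

  print(T.conj().T)                                # the adjoint T*
  print(np.allclose(T @ T.conj().T, np.eye(2)))    # True: T is in U(2)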
But what about U(1)? Let's see what all the members of U(1) are!
Let's take any old 1x1 complex matrix:
a+bi
Hmm. This looks suspiciously like a *complex number*. And
let's take the complex conjugate of each entry:
a-bi
and then flip it across the diagonal:
a-bi
Okay, that's the adjoint. (Hmm, it looks suspiciously like
the complex conjugate of that number we started with.)
Now we take our 1x1 matrix and multiply it by its adjoint:
(a+bi)(a-bi) = a^2 + b^2
We say our 1x1 matrix is unitary if this is the identity, i.e.
the number 1:
a^2 + b^2 = 1.
My gosh, this is just what you wrote down! An element of U(1) is just
a *unit complex number*, that is, one whose magnitude is 1. "Unitary" -
"unit" - what a creepy coincidence!
>A quick check indicated that under multiplication
>these did indeed form a group, presumably the group of all rotations in
>a plane. The identity being (1 + 0i) and the inverse being (a - bi), the
>conjugate.
Yes indeed, you have verified that we have a full-fledged *group* on
our hands here! The group of rotations of the plane, or the group
of symmetries of the circle, if you prefer... though in addition, this
group just *is* a circle! (That's actually no surprise: every group
can be thought of as the group of symmetries of itself; check out
how Z/4 looked like a square, for example.)
Physicists call an element of U(1) a "phase". Now you may or may not
recall a lot of stuff about phases in physics, but phases are deeply
related to electromagnetism, which is ultimately why U(1) is the symmetry
group associated to electromagnetism.
Electromagnetism is all about the circle. Cool, huh?
>I have a nasty feeling that this should nowadays properly be
>expressed in some sort of matrix form.
Well, we actually *have* been working with matrices, namely 1x1
complex matrices.
But you are probably hinting at the fact that rotations of the plane
can also be thought of as 2x2 real matrices. You can think of your
1x1 unitary matrix
a+bi
as the 2x2 real matrix
a -b
b a
if you prefer. People call the group of 2x2 real matrices that represent
rotations SO(2).
So U(1) is isomorphic to SO(2). A profound and basic fact.
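Here is a small Python check (my own) that this correspondence really is
an isomorphism: multiplying two unit complex numbers matches multiplying
the corresponding 2x2 real matrices.

  import numpy as np

  def realify(z):
      # the 2x2 real matrix corresponding to the complex number z
      return np.array([[z.real, -z.imag],
                       [z.imag,  z.real]])

  z, w = np.exp(1j * 0.4), np.exp(1j * 1.1)      # two elements of U(1)
  print(np.allclose(realify(z) @ realify(w), realify(z * w)))   # True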
>Why do I feel shades of GR formalism creeping in?
Probably because general relativity, like electromagnetism and our
theories of all the other forces in nature, are *gauge theories* ---
they are all about groups and parallel transport and tensors and
stuff like that. Einstein explicitly modeled general relativity on
Maxwell's equations, but general relativity was where people first
really caught on to the whole idea of gauge theories --- electromagnetism
was too simple for people to really notice what was going on, basically
because U(1) is such a puny little group, and it's commutative.
Even now there's no real way of knowing whether the (sub-luminal)
velocities people infer in these objects trace the true bulk
velocities of the jet material or are just `pattern speeds' in the way
you mentioned.
--
Martin Hardcastle Department of Physics, University of Bristol
Be not solitary, be not idle
> It wasn't until after I posted that I realized: since the
> density waves in traffic move at a completely different speed
> than individual cars, the same might be true in the density
> wave theory of spiral arms.
That's the whole point. It keeps the spiral arms from winding up
tighter and tighter, even though the stars are all moving at
different angular speeds. (In fact, they are moving at approximately
equal *linear* speeds, in most spiral galaxies--the best evidence
for dark halo matter, since this is not what you would get from
the gravitation of just the visible matter.)
> Can you give away the answer to your firing squad puzzle if nobody
> answers it after a while? I don't see how to do it.
I think I've worked out a solution (although I haven't actually
computationally tested the fine detail). A description in terms of a
state machine is overly complicated, using 19 states and hence 6859
rules. So I'll describe it in general terms.
When the order to fire is given, send two propagating signals down the
line: one at full speed, one at half speed. The full speed signal
should be reflected when you reach the end of the line.
The signals cross at the center of the line. This might be one
soldier or two (depending on whether there are an even or odd number
of soldiers). We then have two sections of equal length to the left
and right of these middle soldiers.
Repeat the procedure iteratively. After the jth iteration you have
2^j sections, all of equal length. When this length reaches zero, all
the soldiers fire.
This solution takes roughly three times the one-way transit time.
Harry.
---
Harry Johnston, om...@ihug.co.nz
IT WASN'T, he decided, that you couldn't take it with you, because
you could. It was just that there wasn't exactly a superfluity of
things you could spend it on once you got there.
--- Tom Holt, "Djinn Rummy"
[Moderator's note: I would have thought you'd want a factor of three
difference in speed, not a factor of two, so that when the fast signal
has traveled 1.5 times the length of the line, the slow signal has
propagated 0.5 times, and they meet in the middle. With a factor of
two, wouldn't the two signals meet 2/3 of the way down the line? Then
again, maybe I've just misunderstood the whole thing. -TB]
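For what it's worth, here is a two-line check of the speeds (my own
arithmetic): with a half-speed forward signal the reflected full-speed
signal meets it 2/3 of the way along a line of length 1, and you need a
one-third-speed signal to meet it in the middle.

  def meeting_point(slow_speed, L=1.0):
      # fast signal: position t until it reflects at L, then 2L - t
      # slow signal: position slow_speed * t
      # after the reflection they meet when slow_speed * t = 2L - t
      t = 2.0 * L / (1.0 + slow_speed)
      return slow_speed * t

  print(meeting_point(1.0 / 2.0))   # 0.666...
  print(meeting_point(1.0 / 3.0))   # 0.5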
Superluminal jets - you said they were standardly explained by SR. I don't
think this is worth posting, but if I may be permitted to appear to correct
you, this is not really exact. Yes, of course it is accounted for in SR,
but it's not really "the guts of it", which is much simpler, and (as you
said) depends merely on finite light speed. Thus, it is not *really* an
"explanation" by SR, but by Newtonian, (or actually, Roemerian) physics.
It's really just a geometrico-dynamic result whereby the earlier emitted
light is almost caught up by the emitter, so the later emitted light arrives
at us almost straight afterwards, giving the huge apparent angular velocity.
Of course you know this perfectly well; I'm just pointing out it can be
explained *before* calling SR into account. Of course we all know the
pitfalls of "correct explanation of phenomena"! Remember the battle I had
over why the tide bulges *away* from the moon! (NOT due to rotation!)
Finally - the firing squad problem. Yes, it's a great little problem isn't
it!? When I first read of it, I would have been certain it couldn't be
done, had it not appeared as the question - "do it". Actually, I'm rather
surprised you've not seen it - it's a classic CA problem, and I know
you're quite keen on CA? Anyway... If you want to do as much for yourself
as possible, try again. If not, there is a brief hint in the PS.
Cheers, Bill.
P.S. Maybe you could post the SR thing on s.p.r. after all - it seems quite
interesting. I leave it up to you.
P.P.S. Firing squad problem: HINT:
Can you arrange for a "signal" of speed 1 to propagate down the row of
soldiers? Yes, of course you can. So...
What if, as well, you propagate some *slower* signal also... ?
This is rather fun (at this elementary level anyway), another way of
looking at things.
In article <5r32er$5...@agate.berkeley.edu>, john baez
<ba...@math.ucr.edu> writes
>
>In article <HeZiBEAD...@upthorpe.demon.co.uk>,
>Oz <O...@upthorpe.demon.co.uk> wrote:
>
>>However it suddenly occurred to me in the shower a few minutes ago that
>>'unitary' could mean an infinite group of magnitude 1 of form (a + bi)
>>where a^2+b^2 =1.
>
>This is indeed the correct answer. I guess the problem was I didn't tell
>you what "unitary" means. The conceptual definition of a unitary matrix
>is that it gives a linear transformation of C^n
Um. C^n??
I get the feeling that expressing my knowledge level of this area as
near zero hardly describes it adequately. Hmmm Would this be the group
of complex numbers? Unh, hang on, that would have to be C^1, C^n
presumably has n dimensions.
>so let me give
>you a less conceptual definition of "unitary". This less conceptual
>definition will let you take a complex matrix T and check in a completely
>mechanical way to see if it's unitary.
>
>First I need to tell you what the "adjoint" of a matrix T is. To take
>the adjoint of T, you first take the complex conjugate of each entry,
>and then you flip it along the diagonal.
Looks fine with a 2x2 matrix, how do you go about it for a 3x3 or even a
3x3x3 matrix? Is it (mechanically) always this simple?
>Now, we say that a complex matrix is unitary if TT* and T*T are
>both the identity matrix. Actually, for n x n matrices when n is
>finite (the only case we're worrying about here!), if TT* = 1 then
>T*T = 1 as well, so we can just say:
>
>T is unitary if TT* = 1.
I note you have switched from additive to product definitions. One of
the few things I remember from reading about sets.
Isn't this a requirement for it to be a group anyway? Aren't you just
saying that the inverse of T is T*? I must have missed something here,
why is the adjoint considered different from the inverse, or is it merely
that this manipulation produces the inverse for these groups. If this is
the case it's clearly another case of mathematicians formalising
something conveniently simple to do (by comparison of course).
---
>Then T* is
>
> 0 i
>
>-i 0
......
>which is the identity matrix 1. So: yup, this matrix T is unitary.
>Since it's a 2x2 matrix we say it's a member of U(2).
Um. Let's play with this a bit. So just sticking with reals (which are
complex really, just simple ones), after an absurd amount of playing
about I eventually find that
[ a              -(1-a^2)^(.5) ]      0 < a <= 1
[                              ]
[ (1-a^2)^(.5)    a            ]      are a whole bunch in U(2) as well.
Hang on though, this ought to be more general than just reals. I bet if
a is complex and |a|=1 then it still works.
...... some hours later Oz tries another tack. :-(
Try something more general.
[a+bi -x-yi]
[x+yi a+bi] looks general enough.
Half an hour later some conditions result:
ay=bx and a^2 + b^2 + x^2 + y^2 = 1 (ie 1>=a,b,x,y >0)
So if b=0=y then OK the original condition returns.
If a = (1/2) = b then x=y and x=y=1/2, um, scribble.. oh, it works!
Given all the squaring and rooting I wouldn't want to look out any more.
Expressing (a+bi)^(.5) gets a bit messy.
Is the group in fact continuous? What does it represent, a sphere in 3D?
>Kids all over
>the world have heard of the group U(2). If you don't know what I mean
>just ask your son.
[Poster's Note: He is probably too young to know about U2, currently the
rage amongst his friends is Nirvana. Odd, they are an early 70's group.
OTOH The Gruppoids is a good name for a pop group.]
>But what about U(1)?
.....
>Yes indeed, you have verified that we have a full-fledged *group* on
>our hands here! The group of rotations of the plane, or the group
>of symmetries of the circle, if you prefer... though in addition, this
>group just *is* a circle! (That's actually no surprise: every group
>can be thought of as the group of symmetries of itself; check out
>how Z/4 looked like a square, for example.)
OK, following what I would call a locus. (Except that Z/4 is not
continuous since it isn't infinite.)
>Physicists call an element of U(1) a "phase".
And everyone else calls an angle.
>Now you may or may not
>recall a lot of stuff about phases in physics, but phases are deeply
>related to electromagnetism, which is ultimately why U(1) is the symmetry
>group associated to electromagnetism.
>
>Electromagnetism is all about the circle. Cool, huh?
Er ... but ..... well ..... I don't really see the connection.
Why should I consider EM as a collection of rotations around a circle
and *nothing else*?
There must be some deeper reasoning here.
>But you are probably hinting at the fact that rotations of the plane
>can also be thought of as 2x2 real matrices. You can think of your
>1x1 unitary matrix
>
>a+bi
>
>as the 2x2 real matrix
>
>a -b
>b a
>
>if you prefer.
Cor! You're right! Wow!
Hang on, it's what I worked out (very laboriously) before.
Just didn't see it. Dang. So a 1x1 complex matrix is equivalent to a
particular 2x2 real matrix, neat. So a 2x2 complex matrix is presumably
expressible as a 3x3 real matrix?
That seems to make complex numbers rather 'symmetrical' and non-general.
Or does it mean matrix formulation is more general, I don't know. Of
course one jumps to the conclusion that 2x2 matrices can represent
rotations (of 1x2 vector matrices), not to mention reflections,
translations, and stretching of all sorts, which of course I knew from
checking my son's exam papers, so really I should have guessed.
>People call the group of 2x2 real matrices that represent
>rotations SO(2).
>
>So U(1) is isomorphic to SO(2). A profound and basic fact.
Presumably 'isomorphic' means each element can be matched one for one
using the same 'conversion'?
Another hang on. U(2) was a complex group, but the form was exactly that
of U(1) without the imaginary part. So I would expect (do NOT ask me to
prove it) that U(1) is a special form of U(2). So U(1) must be in U(2).
Come to that O(3) ought to be isomorphic with U(2).
No?
........
>really caught on to the whole idea of gauge theories --- electromagnetism
>was too simple for people to really notice what was going on, basically
>because U(1) is such a puny little group, and it's commutative.
No, no, no. You can't stop there, it's immoral.
Having just whetted my appetite, got some thought processes (however
elementary) going, you stop! AAaarrrggghhh!!!
I suspect that U(1) may be too simple to bring out the points of how and
why these SU(3)'s and so on are a good way of looking at things, but I
just gotta have some idea how it all fits together.
>The Feynman lectures can be bought off the shelf for £27/vol
>in bookshops like Blackwells which has its main branch in
>Oxford.
Actually earlier this year they quoted me 70+UKP each, and they had to
come from the US. What sort of price would they be second hand in the US
and how easily would they obtained? I ask because as we speak my son
(age 15) is in LA, close to Laguna beach and with suitable bribes might
be induced to get them for me, if only he knew a good place to go.
>Weyl's "Symmetry" should also be available but you never
>know for sure till you order. There is a 1992 edition from
>Princeton UP, priced at £8.50.
True, there is. Ordered, should be here tomorrow. Thanks.
>>The conceptual definition of a unitary matrix
>>is that it gives a linear transformation of C^n [...]
>Um. C^n??
>
>I get the feeling that expressing my knowledge level of this area as
>near zero hardly describes it adequately.
This is why I immediately proceeded to give you a less conceptual
definition. Feel free to ignore my slight nods to the sophisticates
who are watching this show with amused condescension verging on
boredom. However, you simply *have* to learn about C^n if you're
going to mess with group theory and the like.
>Hmmm Would this be the group
>of complex numbers? Unh, hang on, that would have to be C^1, C^n
>presumably has n dimensions.
R means the real numbers, C means the complex numbers.
R^n means n-tuples of real numbers: guys like (x_1,...,x_n) where
all the x_i are real. In short: the vectors you know and love!
Similarly, C^n means n-tuples of complex numbers. These are
more important in quantum mechanics.
Quite often, if you have some set X (like the set of real numbers
or octonions or whatever), we use X^n to denote the set of n-tuples
of guys in X.
However, in mathematical notation, just when you think you get what's
going on, they play a joke on you. For example, S^n does not mean
n-tuples of guys in some mysterious set S; it means the "n-sphere",
the unit sphere in n+1 dimensions, defined by the equation
x_1^2 + .... + x_{n+1}^2 = 1
Keep this in mind for later on in this post when I'll throw S^3 at you.
>>First I need to tell you what the "adjoint" of a matrix T is. To take
>>the adjoint of T, you first take the complex conjugate of each entry,
>>and then you flip it along the diagonal.
>
>Looks fine with a 2x2 matrix, how do you go about it for a 3x3 or even a
>3x3x3 matrix? Is it (mechanically) always this simple?
First, we are talking *matrices* here, not *tensors*, so we
have nxm ones, but we don't have any 3x3x3 ones or other such
monstrosities. Secretly, a matrix is a tensor of the form T^i_j.
But let's pretend you never learned about tensors, okay? A matrix
is then just a box of numbers:
9 8 1 9+i
0 2 3 1
0 1 4 2
although we allow complex ones too.
Second, yeah, to take the adjoint you just conjugate each
entry and then "transpose" the whole thing (which means flip
it across the diagonal). So if the above one is T, then T* is
9 0 0
8 2 1
1 3 4
9-i 1 2
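The same recipe done mechanically in Python, for checking hand computations
(a sketch):

  import numpy as np

  T = np.array([[9, 8, 1, 9 + 1j],
                [0, 2, 3, 1],
                [0, 1, 4, 2]])

  print(T.conj().T)     # the adjoint T*: conjugate each entry, then transpose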
>>Now, we say that a complex matrix is unitary if TT* and T*T are
>>both the identity matrix. Actually, for n x n matrices when n is
>>finite (the only case we're worrying about here!), if TT* = 1 then
>>T*T = 1 as well, so we can just say:
>>
>>T is unitary if TT* = 1.
>
>I note you have switched from additive to product definitions. One of
>the few things I remember from reading about sets.
Do you mean groups? Anyway, yeah, the groups like U(n) and O(n)
and SU(n) and SO(n) are written multiplicatively, since the binary
operation in these groups is matrix multiplication.
>Isn't this a requirement for it to be a group anyway?
Huh? I was just telling you what a unitary matrix was,
I wasn't saying anything about groups.
>Aren't you just saying that the inverse of T is T*?
I'm saying that a matrix T is defined to be "unitary" if its inverse
is T*.
>I must have missed something here,
>why is the adjoint considered different from the inverse, [....]
Huh?? The adjoint isn't "considered" different from the
inverse, it *IS* different from the inverse --- UNLESS your
matrix has the wonderful special property of being unitary.
Take the 1x1 matrix
2
for example. Its adjoint is
2
while its inverse is
1/2
The adjoint is not the inverse! You've got to be lucky --- you've
got to be UNITARY --- for your adjoint to be your inverse. The
adjoint of
i
is
-i
while the inverse of it is
-i
as well, so i is a unitary 1x1 matrix. Yes, it's in U(1)! Well,
we've already been through this; we worked out exactly who was
and who was not in U(1). In general, for an nxn complex matrix
to be in U(n) is a very special property.
>If this is
>the case it's clearly another case of mathematicians formalising
>something conveniently simple to do (by comparison of course).
No, it's clearly another case of a nonmathematician not carefully
reading what the mathematician actually wrote.
>Um. Let's play with this a bit. So just sticking with reals (which are
>complex really, just simple ones), after an absurd amount of playing
>about I eventually find that
>
>[a -(1-a^2)^(.5)] 0<a=<1
>[ ]
>[(1-a^2)^(.5) a ] are a whole bunch in U(2) as well.
Yes indeed.
>Hang on though, this ought to be more general than just reals. I bet if
>a is complex and |a|=1 then it still works.
No.
>...... some hours later Oz tries another tack. :-(
Well, at least you got some practice in matrix multiplication, which
is always good for the soul. I have multiplied more 2x2 matrices in
my day than you could shake a stick at! Usually doing silly computations
that don't work, like that one you just did. As the payoff for all my
labors, I can take one look at that matrix up there and say "if a is
complex, there's no reason at all for that to be unitary".
One reason being that 0<=a<=1 is meaningless when a is complex. :-)
>Try something more general.
>
>[a+bi -x-yi]
>[x+yi a+bi] looks general enough.
My gosh, Oz, how did you dream up that special form? If I didn't know
better I'd guess you were reading books about Pauli matrices! It's
pretty good, but it seems a little bit off; I think something like
this would be better:
[a+bi -x+yi]
[x+yi a-bi]
If we take the adjoint of this we get
[a-bi x-yi]
[-x-yi a+bi]
and if we multiply the former by the latter we get
[a^2 + b^2 + x^2 + y^2 0]
[0 a^2 + b^2 + x^2 + y^2]
which is the identity if and only if
a^2 + b^2 + x^2 + y^2 = 1.
>Half an hour later some conditions result:
>
>ay=bx and a^2 + b^2 + x^2 + y^2 = 1 (ie 1>=a,b,x,y >0)
Hmm, that's pretty close to what I get when I do it my way, but that
extra equation ay=bx is yucky. That's why I like my way better
(though it's not really "my way"; it's a famous old fact).
>Is the group in fact continuous? What does it represent, a sphere in 3D?
More like a sphere in 4D, right? After all, we've got 4 coordinates
a,b,x,y. The way I set things up, we get *exactly* a sphere in 4D. That
extra equation of yours, ay=bx, sort of messes things up....
So, yeah, if we do things my way, what we've got here is precisely
S^3! Remember S^3?
However, not all matrices in U(2) are of the form
[a+bi -x+yi]
[x+yi a-bi]
where a^2 + b^2 + x^2 + y^2 = 1. To see why, work out the
determinant of this matrix.
Honestly, do it! You'll learn some wonderful things if you do, I promise
you. You will see that there is some group out there which is exactly
S^3... not U(2), but a slightly smaller group.
Do it! This is seriously cool stuff!
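If the hand computation wears thin, here is a little Python sketch (mine)
that picks random a, b, x, y with a^2 + b^2 + x^2 + y^2 = 1, confirms the
matrix really is unitary, and prints the determinant so you can see what
value keeps coming out.

  import numpy as np

  v = np.random.randn(4)
  a, b, x, y = v / np.linalg.norm(v)      # so a^2 + b^2 + x^2 + y^2 = 1

  T = np.array([[a + b*1j, -x + y*1j],
                [x + y*1j,  a - b*1j]])

  print(np.allclose(T @ T.conj().T, np.eye(2)))   # True: T is in U(2)
  print(np.linalg.det(T))                          # what do you always get?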
>>Kids all over
>>the world have heard of the group U(2). If you don't know what I mean
>>just ask your son.
>[Poster's Note: He is probably too young to know about U2, currently the
>rage amongst his friends is Nirvana. Odd, they are an early 70's group.]
Umm, Nirvana wasn't around in the early 70s on the planet I come from.
However, it is a bit odd that your son's crowd is into Nirvana at this
point. Hasn't the news gotten across the Atlantic yet that Kurt Cobain
died some time ago?
But I digress. This is sci.physics.research, a dignified forum for
the discussion of research-level physics.
>>Physicists call an element of U(1) a "phase".
>
>And everyone else calls an angle.
Really? Hmm.
>>Now you may or may not
>>recall a lot of stuff about phases in physics, but phases are deeply
>>related to electromagnetism, which is ultimately why U(1) is the symmetry
>>group associated to electromagnetism.
>>
>>Electromagnetism is all about the circle. Cool, huh?
>
>Er ... but ..... well ..... I don't really see the connection.
The *connection*? Ha-ha! Great pun! How devilishly sneaky of you,
Oz.
>Why should I consider EM as a collection of rotations around a circle
>and *nothing else*?
Because it's so incredibly cool that all of electricity, magnetism,
light, and radio waves arises naturally when you assume that there is a
U(1) gauge symmetry of the laws of physics, meaning a separate copy of
U(1) at every single point of spacetime, and you write down the simplest
equation that involves such a symmetry...
and similarly for SU(2) x U(1) you get the laws governing the
electroweak force, combining electromagnetism and the weak force...
and for SU(3) you get the laws governing the strong force...
and similar (but importantly different) principles give you general
relativity when you start with the Lorentz group, SO(3,1).
>There must be some deeper reasoning here.
Whew, *that* is an understatement if I ever saw one!
>>You can think of your 1x1 unitary matrix
>>
>>a+bi
>>
>>as the 2x2 real matrix
>>
>>a -b
>>b a
>>
>>if you prefer.
>
>Cor! You're right! Wow!
Cor?
>Just didn't see it. Dang. So a 1x1 complex matrix is equivalent to a
>particular 2x2 real matrix, neat. So a 2x2 complex matrix is presumably
>expressible as a 3x3 real matrix?
Don't add one --- double! An nxn complex matrix may be thought of as a
special sort of 2n x 2n real matrix, basically because a complex number
is a pair of reals. Or using your newfound lingo, C = R^2, so
C^n is R^{2n}, so a linear transformation of C^n is a special sort of
linear transformation of R^{2n}.
However, there are special cool things about low dimensions, like how
U(1) is *exactly all of* SO(2) --- this doesn't work in higher
dimensions. Usually U(n) is contained in SO(2n), but not nearly the
whole thing.
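Here is the doubling trick spelled out in a short Python sketch (my own):
replace each complex entry a+bi by the real 2x2 block [[a,-b],[b,a]], and
matrix multiplication is respected.

  import numpy as np

  def realify(M):
      # turn an n x n complex matrix into the corresponding 2n x 2n real one
      n = M.shape[0]
      R = np.zeros((2 * n, 2 * n))
      for i in range(n):
          for j in range(n):
              a, b = M[i, j].real, M[i, j].imag
              R[2*i:2*i+2, 2*j:2*j+2] = [[a, -b], [b, a]]
      return R

  A = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
  B = np.random.randn(2, 2) + 1j * np.random.randn(2, 2)
  print(np.allclose(realify(A) @ realify(B), realify(A @ B)))   # True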
>>So U(1) is isomorphic to SO(2). A profound and basic fact.
>
>Presumably 'isomorphic' means each element can be matched one for one
>using the same 'conversion'?
Umm, yeah, basically.
>Another hang on. U(2) was a complex group, but the form was exactly that
>of U(1) without the imaginary part. So I would expect (do NOT ask me to
>prove it) that U(1) is a special form of U(2). So U(1) must be in U(2).
>Come to that O(3) ought to be isomorphic with U(2).
>
>No?
No. I don't follow this at all, though your final conclusion is
reminiscent of an interesting fact about SO(3) and SU(2).
>>[...] electromagnetism
>>was too simple for people to really notice what was going on, basically
>>because U(1) is such a puny little group, and it's commutative.
>
>No, no, no. You can't stop there, it's immoral.
>
>Having just whetted my appetite, got some thought processes (however
>elementary) going, you stop! AAaarrrggghhh!!!
Yes. Never to post again. Unless you compute the determinant of that
matrix up there. I'm going to make you *sweat* for every little bit
of enlightenment, I'm afraid.
>I suspect that U(1) may be too simple to bring out the points of how and
>why these SU(3)'s and so on are a good way of looking at things, but I
>just gotta have some idea how it all fits together.
Yes, U(1) is a bit too babyish to see how the whole picture works,
but it certainly is fun to see how Maxwell's equations pop out.
etc etc.
<snip of important stuff to control thread length>
>>I note you have switched from additive to product definitions. One of
>>the few things I remember from reading about sets.
>
>Do you mean groups?
Nah. Never got as far as that. Sets was Ch 1, groups about Ch 4. (Fib)
All those funny symbols and all. I mean, really, the set of all people
that speak French, have two children and eat oranges.
[Discussion moves to adjoints and unitary matrices]
>Huh? I was just telling you what a unitary matrix was,
>I wasn't saying anything about groups.
Eh? Oooops! So you were. Um, I hadn't really taken that in.
So
>>I must have missed something here,
>>why is the adjoint considered different from the inverse, [....]
I drop myself right in it.
[Painfully simple illustration of why typically adjoints are not
inverses, as befits the apparent IQ of the recipient.]
>as well, so i is a unitary 1x1 matrix. Yes, it's in U(1)! Well,
>we've already been through this; we worked out exactly who was
>and who was not in U(1). In general, for an nxn complex matrix
>to be in U(n) is a very special property.
OK, right. Is it a necessary and sufficient (gosh my old math teacher
will be pleased this one surfaced after 30 yrs) condition that the
adjoint should be the inverse for U(n)?
>>If this is
>>the case it's clearly another case of mathematicians formalising
>>something conveniently simple to do (by comparison of course).
>
>No, it's clearly another case of a nonmathematician not carefully
>reading what the mathematician actually wrote.
No, it's clearly yet another case of a mathematician changing tack
assuming a normal mortal will pick it out whilst his head is spinning
with new concepts and great gobs of infinite complex sets of matrices
impeding his mental equipment. Harrumph. ..... Your turn.
>after an absurd amount of playing
>>about I eventually find that
>>
>>[a -(1-a^2)^(.5)] 0<a=<1
>>[ ]
>>[(1-a^2)^(.5) a ] are a whole bunch in U(2) as well.
.........
>>Try something more general.
>>
>>[a+bi -x-yi]
>>[x+yi a+bi] looks general enough.
>
>My gosh, Oz, how did you dream up that special form? If I didn't know
>better I'd guess you were reading books about Pauli matrices!
Hahahahaha!! What a lovely idea.
However it was me being mindless yet again if you chop out some
intervening text as I have above.
> I think something like
>this would be better:
>
>[a+bi -x+yi]
>[x+yi a-bi]
Damn. I should have spotted this. Obviously lack of adequate experience
with matrices and complex numbers.
>>Is the group in fact continuous? What does it represent, a sphere in 3D?
>
>More like a sphere in 4D, right? After all, we've got 4 coordinates
>a,b,c,d. The way I set things up, we get *exactly* a sphere in 4D.
Ah, how very convenient. Four dimensions is a tad tricky to deal with,
but here we can do it with a 2D complex 2x2 matrix. Heck, you can even
write it on 2D paper. I can see some potential utility for this
structure.
>So, yeah, if we do things my way, what we've got here is precisely
>S^3! Remember S^3?
Ahh. The one you will catch me out on in a post or two.
"it means the "n-sphere", the unit sphere in n+1 dimensions, defined by
the equation x_1^2 + .... + x_{n+1}^2 = 1"
Mind you, why not S^4 since it has 4 dimensions?
Glory me, you aren't going to have one dimension 'special' are you?
>However, not all matrices in U(2) are of the form
>
>[a+bi -x+yi]
>[x+yi a-bi]
>
>where a^2 + b^2 + x^2 + y^2 = 1. To see why, work out the
>determinant of this matrix.
Determinants! I've only ever done two in my life. I need a book ...
Ah, my son's GCSE maths book gives a crib. OK let's go.
det= (a+bi)(a-bi) + (x-yi)(x+yi) = a^2 + b^2 + x^2 + y^2 = 1 (see above)
OK so this group (unproven, but there you go) has determinant = 1.
Now, where have I heard that before? Aren't determinants connected with
inverses? Ah, maybe the inverse is the adjoint for this bunch.
OK, so a complex matrix is unitary iff T*T=1 hmmm.
We have (using my son's book, * = conjugate) that the inverse of
T = [a  -b*]     is     (1/det) [a*  b*]     but det = 1,
    [b   a*]                    [-b   a ]
and the adjoint of T is
[a*  b*]
[-b   a ]     so there you go!
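[Editor's sketch, not from the thread: Oz's claim --- that for this special
form of matrix the determinant is 1 and the adjoint really is the inverse ---
can be checked numerically. The letters a, b, x, y below are a random point
on the unit sphere in 4 dimensions.]
import numpy as np
rng = np.random.default_rng(0)
a, b, x, y = rng.normal(size=4)
n = np.sqrt(a*a + b*b + x*x + y*y)
a, b, x, y = a/n, b/n, x/n, y/n                  # now a^2 + b^2 + x^2 + y^2 = 1
T = np.array([[a + 1j*b, -x + 1j*y],
              [x + 1j*y,  a - 1j*b]])
print(np.allclose(T @ T.conj().T, np.eye(2)))    # True: T is unitary
print(np.isclose(np.linalg.det(T), 1))           # True: determinant 1
print(np.allclose(np.linalg.inv(T), T.conj().T)) # True: inverse = adjoint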
>Honestly, do it! You'll learn some wonderful things if you do, I promise
>you. You will see that there is some group out there which is exactly
>S^3... not U(2), but a slightly smaller group.
Hang on let's look at definitions again.
>====
>JB said:
>U(n) - all n x n unitary complex matrices. U stands for
> "unitary"
Now, we say that a complex matrix is unitary if TT* and T*T are
both the identity matrix. Actually, for n x n matrices when n is
finite (the only case we're worrying about here!), if TT* = 1 then
T*T = 1 as well, so we can just say:
T is unitary if TT* = 1. (T* being the *adjoint* of T)
>====
But the only difference between U(2) and S^3 seems to be that the
adjoint is the inverse which is only the case if det(T)=1. I keep
getting the feeling that det(T) should have something to do with
magnitude in some way. Let's look at our simple matrix representation of
a complex number.
[a -b]
V = [b a] has det(V) = a^2 + b^2, which looks like a magnitude^2.
One might plausibly guess that V/(det)^.5 looks something like a unit
vector.
Unfortunately I don't have a clue what I am on about, but you may
possibly have guessed that already.
I think it would help if I could find a member of U(2) that wasn't in
S^3, for some reason I just can't find an appropriate structure.
Anyway, it's plausible (likely, even) that there are elements in U(2)
that aren't in S^3.
>Do it! This is seriously cool stuff!
Dun it, but I am missing something.
[Continuing thread section of vital importance]
>Umm, Nirvana wasn't around in the early 70s on the planet I come from.
Hmmm, which planet is that? Oz controls himself manfully.
Funny, I am sure that somewhere I have a vinyl with "Nirvana" written on
it. As I remember, they hadn't really properly learned how to play, not
that that was important in the early 70's.
>However, it is a bit odd that your son's crowd is into Nirvana at this
>point. Hasn't the news gotten across the Atlantic yet that Kurt Cobain
>died some time ago?
I think that was the point. Otherwise they probably wouldn't have heard
of them. Doubtless at some point they will come across The Doors.
>But I digress. This is sci.physics.research, a dignified forum for
>the discussion of research-level physics.
Oooops ........
>>>Electromagnetism is all about the circle. Cool, huh?
>>
>>Er ... but ..... well ..... I don't really see the connection.
>
>The *connection*? Ha-ha! Great pun! How devilishly sneaky of you,
>Oz.
Ah.
I get the impression I said something smart.
I would feel ever so much better if I knew what it was.
The only connection (mathematically) I have come across was in the GR
Tut, but this was handed down from on high and The Wiz positively
refused to explain how he got it. Turned me into an intestinal parasite
for looking, if I remember correctly.
>Because it's so incredibly cool that all of electricity, magnetism,
>light, radio waves, arises naturally when you assume that there is a
>U(1) gauge symmetry of the laws of physics, meaning a separate copy of
>U(1) at every single point of spacetime, and you write down the simplest
>equation that involves such a symmetry...
>
>and similarly for SU(2) x U(1)
>and for SU(3)
>and SO(3,1).
I *DO* hope a glimmer (at least) of this ends up being explained.
.........
>Cor?
London slang for:
"God, blind me" (an oath made if one is suspected of being untruthful).
Used when making a particularly unlikely (but true) statement.
Later transferred to the recipient's reply (with a ?).
Eventually abbreviated to:
"Gor, blimey". (See WW2 UK films with trusty cockney private).
or
"Cor, blimey".
And further abbreviated as "Cor".
Best said with strong cockney accent.
An expression of astonishment equivalent to "Goodness me", but earthier.
.......
>Yes, U(1) is a bit too babyish to see how the whole picture works,
>but it certainly is fun to see how Maxwell's equations pop out.
How on earth does Maxwell come out of a group U(1)?
It's just not plausible.
I hope this thread gets at least that far in the fullness of time.
>That's the whole point. It keeps the spiral arms from winding up
>tighter and tighter, even though the stars are all moving at
>different angular speeds.
That is a good explanation of the non-winding up of spiral arms but raises
the question... what is the "orbital" period of the shock wave in the galaxy?
The galaxy itself rotates in approximately 250 million years but I have never
seen an estimate of the shock wave period. Of course, because there are two
spiral arms we would get hit by the shock fronts twice in each synodic
revolution.
There have been many possible explanations put forward for the 27 million
year extinction event cycle; one of these is the galactic plane crossing by
the solar system (which is estimated at ~30 MY). However a shock wave (which
sets off supernovae) would be a reasonable candidate, and the following
frequency relationship would hold.
Let fgr be the frequency of galactic rotation of the solar system, ~1/(250 MY).
Let fex be the frequency of extinction events, ~1/(27 MY).
Let fgw be the frequency of the galactic wave rotation.
Then fex is equal to 2*fgw - fgr, and so we can deduce that fgw would need to
be ~1/(49 MY) if the supposition is correct.
I am guessing that the spiral arms are about 5 to 8 kpc apart, and so the
shock wave would need to be travelling at 100 to 160 km/s to achieve this
rate of "rotation". Does this sound reasonable? Does anyone have better
numbers for the speed of shock waves or the distance between spiral arms?
-- Ray Tomes -- rto...@kcbbs.gen.nz -- Cycles & Harmonics Theory --
http://www.kcbbs.gen.nz/users/rtomes/rt-home.htm
subscribe to cycles-...@esosoft.com interdisciplinary cycles list
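[Editor's note: taking the poster's relation fex = 2*fgw - fgr at face value
--- no comment on the underlying model --- the arithmetic can be checked in a
couple of lines.]
fgr = 1 / 250.0        # galactic rotation frequency of the solar system, per MY
fex = 1 / 27.0         # extinction-event frequency, per MY
fgw = (fex + fgr) / 2  # solve fex = 2*fgw - fgr for fgw
print(1 / fgw)         # ~48.7 MY, i.e. the quoted ~49 MY period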
On 21 Jul 1997, Oz wrote:
> In article <5q6ioj$1...@agate.berkeley.edu>, Michael Weiss
> <colu...@opengroup.org> writes
> >Oz fears:
> >
> > Hmmmm. Why do I get the feeling that I wouldn't be able to follow said
> > book even if I had it. However one has the feeling that one could
> > appreciate the concept without full understanding of the maths with a
> > little help.
> >
> >the book in question being Weyl's _Symmetry_.
If I might just barge in for a moment, I would like to share a few random
notes drawn from a couple of Weyl's beautiful little books.
Weyl, Symmetry, p. 133-4
"The two great events in twentieth century physics are the rise of
relativity theory and of quantum mechanics. Is there also some connection
between quantum mechanics and symmetry? Yes indeed. Symmetry plays a
great role in ordering the atomic and molecular spectra, for the
understanding of which the principles of quantum mechanics provide the
key. An enormous amount of empirical material concerning the spectral
lines, their wave lengths, and the regularities in their arrangements had
been collected before quantum mechanics scored its first success; this
success consisted in deriving the law of the so-called Balmer series in
the spectrum of the hydrogen atom and in showing how the characteristic
constant entering into that law is related to charge and mass of the
electron and Planck's famous constant of action h. . . . It turned out
that, once these foundations had been laid, symmetry could be of great
help in elucidating the general character of the spectra."
(Incidentally, we know that part of the spectrum, the visible portion, is
regularly related to another spectrum, which we in the physics community
commonly refer to as the rainbow. Concerning its colors, Weyl says in one
place, in a direct echo of Newton:)
"It is easily seen that such a quality as "green" has an existence only
as the correlate of the sensation "green" associated with an object given
by perception, but that it is meaningless to attach it as a thing in
itself to material things existing in themselves. This recognition of the
subjectivity of the qualities of sense is found in Galilei (and also in
Descartes and Hobbes) in a form closely related to the principle
underlying the constructive mathematical method of our modern physics
which repudiates "qualities". According to this principle, colours are
"really" vibrations of the aether, i.e. motions."
Now, of course if Weyl had read his Leibniz more carefully, he would have
understood that the parable of the mill still obtains, but he seems to
have dimly intuited something of the sort, for in another place he
rambles on in an openly philosophical manner:
(Mind & Nature, p. 18)
"Mathematics has introduced the name isomorphic representation for the
relation which according to Helmholtz exists between objects and their
signs.26 I should like to carry out the precise explanation of this
notion between the points of the projective plane and the color qualities
. . . On the one side, we have a manifold S1 of objects - the points of a
convex section of the projective plane - which are bound up with one
another by certain fundamental relations R, R', . . .; here, besides the
continuous connection of the points, it is only the one fundamental
relation: "The point C lies on the segment AB. " In projective geometry
no notions occur except such as are defined on this basis. On the other
side, there is given a second system S2 of objects - the manifold of
colors - within which certain relations R, R', . . . prevail which shall be
associated with those of the first domain by equal names, although of
course they have entirely different intuitive content. Besides the
continuous connection, it is here the fundamental relation: "C arises by
a mixture from A and B"; let us therefore express it somewhat strangely
by the same words we used in projective geometry: "The color C lies on
the segment joining the colors A and B." If now the elements of the
second system S2 are made to correspond to the elements of the first
system S1 in such a way, that to elements in S1 for which the relation
R, or R', or . . . holds, there always correspond elements in S2 for
which the homonymous relation is satisfied, then the two domains of
objects are isomorphically represented on one another. In this sense the
projective plane and the color continuum are isomorphic with one another.
Every theorem which is correct in the one system S1 is transferred
unchanged to the other S2. A science can never determine its subject
matter except up to an isomorphic representation. The idea of isomorphism
indicates the self-understood, insurmountable barrier of knowledge. It
follows that toward the "nature" of its objects science maintains
complete indifference.27 Thus, for example, what distinguishes the colors
from the points of the projective plane one can only know in immediate
alive intuition. . . .
These somewhat anticipatory speculations were brought about by the lack
of similarity which prevails between the physical colors (!) and the
processes excited by them on the retina, and by the Helmholtz "sign
theory" of sensations, which we abstracted from it. The processes on the
retina produce excitations which are conducted to the brain in the optic
nerves, maybe in the form of electric currents. Even here we are still in
the real sphere. But between the physical processes which are released in
the terminal organ of the nervous conductors in the central brain and the
image which thereupon appears to the perceiving subject, there gapes a
hiatus, an abyss which no realistic conception of the world can span. It
is the transition from the world of being to the world of appearing image
or of consciousness. Here we touch the enigmatic twofold nature of the
ego, namely that I am both: on the one hand a real individual which
performs real psychical acts, the dark, striving and erring human being
that is cast out into the world and its individual fate; on the other
hand light which beholds itself, intuitive vision, in whose consciousness
that is pregnant with images and that endows with meaning, the world
opens up. Only in this "meeting" of consciousness and being both exist,
the world and I."xlv
Unsatisfied, he dithers on, relentlessly:
M&N, pp. 8-10)
"Monochromatic light is completely determined as to its quality by the
wave-length, because its oscillation law with regard to time and its wave
structure have a definite simple mathematical form which is given by the
function sine or cosine. Every physical effect of such light is completely
determined by the wave-length together with the intensity. To
monochromatic light corresponds in the acoustic domain the simple tone.
Out of different kinds of monochromatic light composite light may be
mixed, just as tones combine to a composite sound. This takes place by
superposing simple oscillations of different frequency with definite
intensities. The simple color qualities form a one-dimensional manifold,
since within it the single individual can be fixed by one continuously
variable measuring number, the wave-length. The composite color qualities,
however, form a manifold of infinitely many dimensions from the physical
point of view. For the complete description of a compound color requires
the indication with which intensity each of the infinitely many possible
wavelengths lambda is represented; so that it involves infinitely many
independently variable quantities J_lambda. In contrast hereto - what dearth in
the domain of visually perceived colors! As Newton already made evident by
his color-disk, they form only a two-dimensional manifold. . . . This
discrepancy between the abundance of physical "color chords" and the
dearth of the visually perceived colors must be explained by the fact that
very many physically distinct colors release the same process in the
retina and consequently produce the same color sensation. By parallel
projection of space on to a plane, all space points lying on a projecting
ray are made to coincide in the same point on a plane; similarly this
process performs a kind of projection of the domain of physical colors
with its infinite number of dimensions on to the two-dimensional domain of
perceived colors whereby it causes many physically distinct colors to
coincide. . . .
It seems useful to me to develop a little more precisely the "geometry"
valid in the two-dimensional manifold of perceived colors. For one can do
mathematics also in the domain of these colors. The fundamental operation
which can be performed upon them is mixing: one lets colored lights
combine with one another in space . . ."lxxii
(It is rather as though R, G & B formed the basis vectors of some weird
space, known to us only in immediate perception.)
What is Weyl's place in the pantheon of high brows? Let's give a listen
to the following author, whose name I cannot place at this moment but who
I seem to remember is frightfully respectable.
""Everything that goes on in spacetime has its geometric description, and
almost every one of these descriptions lends itself to ready
generalization from flat spacetime to curved spacetime. The greatest of
the differences between one geometric object and another is its scope:
the individual object (vector) for the momentum of a certain particle at a
certain phase in its history, as contrasted to the extended geometric
object that describes the electromagnetic field defined throughout space
and time ("antisymmetric second-rank tensor field" or, more briefly, field
of 2-forms"). The idea that every physical quantity must be describable by
a geometric object, and that the laws of physics must be expressible as
geometric relationships between these geometric objects, had its
intellectual beginnings in the Erlanger program of Felix Klein (1872),
came closer to physics in Einstein's "principle of general covariance" and
in the writings of Hermann Weyl (1925), seems to have first been
formulated clearly by Veblen and Whitehead (1932), and today pervades
relativity theory, both special and general. A. Nijenhuis (1952) and S.-S.
Chern (1960, 1966, 1971) have expounded the mathematical theory of
geometric objects. But to understand or do research in geometrodynamics,
one need not master this beautiful and elegant subject. One need only know
that geometric objects in spacetime are entities that exist independently
of coordinate systems or reference frames. . . ."ciii
(MTW's Gravitation, probably.)
"A rose by any other name ..." as someone said. Can a sweet smell be a
vector invariant? The question is patently weird and therefore (I submit!)
possibly interesting in its implications.
In article <5r6g6a$b...@agate.berkeley.edu>, ba...@math.mit.edu (John Baez)
writes:
|However, there are special cool things about low dimensions, like how
|U(1) is *exactly all of* O(2) ---
No, SO(2).
Keith Ramsay There is nothing on this earth, and little beyond it,
kra...@aol.com that nobody ever denounces. -- Matt McIrvin
On 24 Jul 1997, John Baez wrote:
> >>Electromagnetism is all about the circle. Cool, huh?
> >
> >Er ... but ..... well ..... I don't really see the connection.
>
> The *connection*? Ha-ha! Great pun! How devilishly sneaky of you,
> Oz.
>
> >Why should I consider EM as a collection of rotations around a circle
> >and *nothing else*?
Yeah, c'mon John we want the goodies!
[...snip...]
> >>[...] electromagnetism
> >>was too simple for people to really notice what was going on, basically
> >>because U(1) is such a puny little group, and it's commutative.
> >
> >No, no, no. You can't stop there, it's immoral.
> >
> >Having just whetted my appetite, got some thought processes (however
> >elementary) going, you stop! AAaarrrggghhh!!!
I agree with Oz here.
[...snip...]
>
> Yes, U(1) is a bit too babyish to see how the whole picture works,
> but it certainly is fun to see how Maxwell's equations pop out.
I want to see how Maxwell's equations "pop out". It's good practice for
you! What was that thing that Feynman (I think) said? Something like, "If
you can't explain it with tables, chairs and beer mugs, you don't
understand it." (Maybe I should look up that quote.)
>What is Weyl's place in the pantheon of high brows?
Pretty far up there. He was one of the first people to really
understand the importance of group theory for physics. I believe his
book on the classical groups (SL(n), SU(n), SO(n), Sp(n) and the like)
was the first to put a huge wad of old stuff on invariant theory and
Young diagrams into its proper group-theoretic context. He also did a
lot of fundamental work that applies to general Lie groups, such as the
"Weyl character formula". The modern definition of a vector space first
appears in his book "Space, Time, Matter"! I also believe he invented
the term "gauge theory".
As always, my knowledge of history is far shakier than that of math
or physics --- all I ever retain from my readings is an impressionistic
blur --- so I hope someone like Michael Weiss corrects and amplifies
the above paragraph.
Two good places to read about the effect Weyl had on math and physics
are the "Symposium on the Mathematical Heritage of Hermann Weyl"
and the "Hermann Weyl Centenary Lectures".
Personally, what I like about Weyl is his vision of the unity of
mathematics and physics, his deep appreciation of the power of
symmetry, and his understanding of the role of philosophy in
science. He was the exact opposite of a technical specialist
grinding away on one little corner of his speciality. He saw the
grand sweep of things --- and told us about it!
>I want to see how Maxwell's equations "pop out". It's good practice for
>you!
Actually, I got a lot of practice when I wrote a book on this subject,
called "Gauge Fields, Knots and Gravity". If you're in a hurry to see
Maxwell's equations pop out of U(1) gauge theory, you can just read
that!
Of course, one can summarize it in a single sentence: "The curvature F
of a U(1) connection is the electromagnetic field, and if we take the
Lagrangian to be tr(F^2), we get Maxwell's equations."
However, that's a summary, not an explanation.
You mention Feynman saying that you only really understand something
if you can explain it by moving beer mugs around on the table and that
sort of thing. If you want to see how he explains quantum
electrodynamics in terms of U(1), see his book "QED: The Strange Theory
of Light and Matter". If you pay careful attention, you'll see he is
always talking about circles and complex numbers and phases and
things, which is his way of talking about U(1). A phase, or unit
complex number, is the same as an element of U(1).
There he is talking about the *quantum* version of Maxwell's equations.
If you're just interested in the *classical* version, it's good to start
this way. What the vector potential does is tell a particle how its
phase changes when you move it along a curve in spacetime.
"Hey, I thought we were talking *classical* electromagnetism! What's
this phase stuff doing here?"
Good point: it's easiest to understand this phase stuff quantum mechanically,
but we can consider, if we wish, the phase of a *quantum* particle moving
through a *classical* electromagnetic field. This is called a "semiclassical
approximation". While it's a bit schizophrenic, it's tremendously useful
as an approximation.
Okay, so: move your particle around a little square in the xy plane.
Say the sides of the square are length epsilon. Its phase gets multiplied
by approximately
exp(ik epsilon^2)
where the number c depends on the electromagnetic field where your
little square is. What's that number k? It's the charge of your
particle times the Z COMPONENT OF THE MAGNETIC FIELD!
If we used a little square in yz plane we'd similarly get an answer
involving the X COMPONENT OF THE MAGNETIC FIELD.
If we used a little square in zx plane we'd similarly get an answer
involving the Y COMPONENT OF THE MAGNETIC FIELD.
What about if we move it around a little square in the xt plane,
where t is the time axis? This is a little harder to do (I can
explain how if you can't figure out how), but if you do it, you
get the X COMPONENT OF THE ELECTRIC FIELD!
If we used a little square in yt plane we'd similarly get an answer
involving the Y COMPONENT OF THE ELECTRIC FIELD.
If we used a little square in zt plane we'd similarly get an answer
involving the Z COMPONENT OF THE ELECTRIC FIELD.
Those are all the choices.
Now, I haven't explained where Maxwell's equations come from,
but I've explained how the electromagnetic field comes out of
U(1), which is a first step.
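[Editor's sketch, not from the thread: a numerical version of the "little
square" statement, under the standard assumption that the phase picked up
along a path is exp(i q times the line integral of A . dl). The vector
potential A = (-B0*y/2, B0*x/2) below describes a uniform magnetic field of
strength B0 in the z direction; the names and numbers are mine.]
import numpy as np
B0, q, eps = 0.7, 1.0, 1e-3
def A(x, y):
    return np.array([-B0 * y / 2, B0 * x / 2])
def phase_exponent_around_square(x0, y0, eps, n=50):
    # q times the line integral of A around a small square in the xy plane
    corners = [(x0, y0), (x0 + eps, y0), (x0 + eps, y0 + eps), (x0, y0 + eps), (x0, y0)]
    total = 0.0
    for (xa, ya), (xb, yb) in zip(corners[:-1], corners[1:]):
        dl = np.array([xb - xa, yb - ya]) / n
        total += sum(A(xa + t*(xb - xa), ya + t*(yb - ya)) @ dl
                     for t in np.linspace(0, 1, n))
    return q * total
print(phase_exponent_around_square(0.3, -0.2, eps))   # the k*eps^2 in exp(ik eps^2)
print(q * B0 * eps**2)                                # charge times B_z times eps^2: they agree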
I wrote Oz a letter, which he says "would make JB feel less alone!"
if I were to post it.
=================================
>Um. C^n??
The space of n-tuples of complex numbers, (z1,...,zn).
>>First I need to tell you what the "adjoint" of a matrix T is. To take
>>the adjoint of T, you first take the complex conjugate of each entry,
>>and then you flip it along the diagonal.
>
>Looks fine with a 2x2 matrix, how do you go about it for a 3x3
The diagonal is longer, of course, but...
[a b c]
[d e f]
[g h k]
goes to
[a* d* g*]
[b* e* h*]
[c* f* k*]
>or even a
>3x3x3 matrix? Is it (mechanically) always this simple?
Matrices in this context are square, not cubical.
>Isn't this a requirement for it to be a group anyway? Aren't you just
>saying that the inverse of T is T*?
That's what he's saying.
>I must have missed something here,
>why is the adjoint considered different from the inverse, or is it merely
>that this manipulation produces the inverse for these groups.
The latter. The matrix
[1 1]
[0 1]
has an inverse
[1 -1]
[0 1]
and an adjoint
[1 0]
[1 1]
and these are not the same.
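[Editor's sketch, not from the thread: the same example done numerically,
plus a unitary matrix for contrast --- for a unitary matrix the adjoint and
the inverse coincide, which is the whole point of the definition.]
import numpy as np
T = np.array([[1, 1],
              [0, 1]], dtype=complex)
print(np.linalg.inv(T))            # [[1, -1], [0, 1]]
print(T.conj().T)                  # [[1, 0], [1, 1]]  : adjoint and inverse differ
t = 0.4
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(np.linalg.inv(R), R.conj().T))   # True: R is unitary (in fact in SO(2))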
>If this is
>the case it's clearly another case of mathematicians formalising
>something conveniently simple to do (by comparison of course).
Yes, it's relatively simple.
>Um. Let's play with this a bit. So just sticking with reals (which are
>complex really, just simple ones), after an absurd amount of playing
>about I eventually find that
>
>[a -(1-a^2)^(.5)] 0<a=<1
>[ ]
>[(1-a^2)^(.5) a ] are a whole bunch in U(2) as well.
Right. Those are rotations. When dealing with real numbers, one calls them
"orthogonal" matricies. The matricies in U(2) with real
entries constitute O(2).
If a=cos(t), then you have the rotations
[cos(t) -sin(t)]
[sin(t) cos(t)]
which are what you've got if you use the right value for t.
>Er ... but ..... well ..... I don't really see the connection.
>Why should I consider EM as a collection of rotations around a circle
>and *nothing else*?
>There must be some deeper reasoning here.
An electromagnetic field tells you how the phase of a charged particle
changes as you take it around a loop in space-time. A direct comparison of
phase at different points doesn't have meaning as such, but one can talk
about what happens if you
transport the phase from one place to another (along a path).
There's the Aharonov-Bohm effect, a gem. One can perform interference
experiments with electrons, as you probably
know. A beam is split up and recombined, with interference
fringes. Magnetic fields are places where, if you transport an
electron around them, it changes in phase. So put a magnetic
field through the middle of an interference experiment, and lo,
the fringes you get shift. The comparison between the phases of
the two beams gets offset by something, depending on how much
of a magnetic field there is. The thing which is amazing to many
people is that this occurs even though one has carefully
shielded the electron beams from being exposed directly to the
magnetic field.
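[Editor's sketch, not from the thread: the Aharonov-Bohm statement made
concrete with an idealized solenoid of flux Phi along the z axis. Outside the
solenoid B = 0, yet the line integral of A around any loop enclosing it is
Phi, so the two beams pick up a relative phase q*Phi. The function names and
numbers are my own.]
import numpy as np
Phi = 2.5
def A(x, y):
    # vector potential outside a thin solenoid; its curl vanishes out here
    r2 = x * x + y * y
    return (Phi / (2 * np.pi)) * np.array([-y, x]) / r2
def loop_integral(radius, n=1000):
    dt = 2 * np.pi / n
    total = 0.0
    for ti in np.linspace(0, 2 * np.pi, n, endpoint=False):
        x, y = radius * np.cos(ti), radius * np.sin(ti)
        dl = radius * dt * np.array([-np.sin(ti), np.cos(ti)])
        total += A(x, y) @ dl
    return total
print(loop_integral(0.5))   # ~2.5
print(loop_integral(4.0))   # ~2.5: same enclosed flux, hence the same fringe shift,
                            # even though the beams never enter the field region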
Electric fields represent the change in phase as one traces out a loop,
two of whose sides are fixed in space but trace out changes
in time. Say you have an electron sitting relatively still in an
electric field. This means that it has a low momentum. The difference in
phase between two bits of it, A and B, is relatively
small, where we take the phase of A and carry it along to B.
Assume the electric field is along the line between A and B. Now
take A and B forward in time a little ways to A' and B'. Close up
the loop by considering what happens to the phase of the left hand
bit if you carry it to the right hand bit.
Given that there is an electric field present (assume it is along the
line between the two bits), any given phase undergoes a shift if you take
it around the loop from A to B to B' to A' and back to A.
Likewise, given that the electron's phase at A and B match up along the
(spatial) path between them, when we carry those
phases forward to A' and B' respectively, they no longer match
up, comparing them using the path between A' and B'.
The upshot of this is that the formerly low-momentum electron has now
acquired a momentum from the electric field.
One can see also how magnetic fields interact with charged particles. If
an electron is moving along, and runs into a region
with a magnetic field crossing its path, the phase of one side of
the electron gets shifted relative to the phase of the other side,
and hence it acquires a momentum perpendicular to its original
direction of travel.
This is all another instance of that concept of "connection" which was
discussed in relation with general relativity. There, one was
transporting vectors and co-vectors around loops. Here, one transports
phases around loops.
> So a 2x2 complex matrix is presumably
>expressible as a 3x3 real matrix?
4x4. Each entry may be replaced by a 2x2 chunk.
The space C^n is said to have n "complex dimensions". But since each
complex number is in effect two real numbers, it has a correspondence to
R^(2n), the space where we have 2n real coordinates. So C^n is also said
to have 2n real dimensions.
An nxn complex matrix acts on C^n. A 2nx2n real matrix acts on R^(2n). One
can think of them as the same.
>>So U(1) is isomorphic to O(2). A profound and basic fact.
He meant, U(1) is isomorphic to SO(2).
>Presumably 'isomorphic' means each element can be matched one for one
>using the same 'conversion'?
And the multiplication corresponds too. To a+bi we associate
[a b]
[-b a]
(or maybe we want the minus sign on the other b-- both
ways give an isomorphism but one or the other is more
conventional). If (a+bi)(c+di)=(ac-bd)+(ad+bc)i, then for
this to be a homomorphism, one needs
[a b][c d]=[ac-bd ad+bc]
[-b a][-d c] [-ad-bc ac-bd]
which is true. For a homomorphism to be an isomorphism,
it has to be a 1-1 correspondence between the two
groups.
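[Editor's sketch, not from the thread: a two-line spot check of that
homomorphism property, using Keith's convention a+bi -> [[a, b], [-b, a]].]
import numpy as np
def rep(z):
    a, b = z.real, z.imag
    return np.array([[a, b], [-b, a]])
z, w = 0.3 + 0.7j, -1.2 + 0.4j
print(np.allclose(rep(z) @ rep(w), rep(z * w)))   # True: products correspond
print(np.allclose(rep(1), np.eye(2)))             # True: identity goes to identity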
>No, no, no. You can't stop there, it's immoral.
>
>Having just whetted my appetite, got some thought processes (however
>elementary) going, you stop! AAaarrrggghhh!!!
>
>I suspect that U(1) may be too simple to bring out the points of how and
>why these SU(3)'s and so on are a good way of looking at things, but I
>just gotta have some idea how it all fits together.
Well, the fields which are described by SU(3) and SU(2) gauge field theory
are like the electromagnetic field, except that
instead of telling you how to change the phase of something
(described by U(1)) as you go from place to place in space-time, they tell
you how to change something which is acted upon by SU(3) or SU(2) as one
goes from place to place.
This last remark may seem even more obscure than the rest of the story,
but maybe you can get some feel for what I mean. A linear "action" of a
group is an operation [z1 z2... zn].g where g is an element of the group
and z1,...,zn are complex numbers, which
is linear in the z1,...,zn for each g.
When the group is U(1), which is essentially a circle, there isn't
all that much variety in actions. Or even if the group is abelian.
It can be "decomposed" into actions which are just like shifting the
phase of a complex number.
The "charge" can be some multiple of the unit charge. This means that as
one goes around the circle in U(1), the phase of the particle goes around
n times, where n is the charge. If the charge is 2, the
particle gets twice as big a kick out of electric fields, and is
deflected by twice as much by magnetic fields. If the charge is 0, it
"ignores" them. If the charge is -1, it gets affected the opposite way.
This is all the variety in actions on a single complex number, though.
It has to do with the fact that acting by g and then h is the same as
acting by h and then g. This simplifies everything.
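[Editor's sketch, not from the thread: the "charge n" statement in miniature.
As theta runs once around U(1), a charge-n object's phase exp(i n theta)
winds around the circle n times; charge 0 never moves, charge -1 winds the
other way.]
import numpy as np
theta = np.linspace(0, 2 * np.pi, 1000)
for charge in (0, 1, 2, -1):
    z = np.exp(1j * charge * theta)
    winding = np.sum(np.angle(z[1:] / z[:-1])) / (2 * np.pi)
    print(charge, round(winding))    # prints 0, 1, 2, -1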
On the other hand, SU(2) and SU(3) can "act on" particles in varied
ways. The arrangement of particles (the "eight fold way") was noticed
before the gauge field explanation was offered. This time, the arrangement
is more complicated than just having a number line along which the
particles can be placed. This gave clues as to what might be going on. The
linear "actions" of a group are known as representations, and seeing
something resembling the space of
representations of a group is like getting its footprints. I remember
reading the story of some physicist who wanted to know the
representations of a group, not knowing that these had been extensively
studied by mathematicians, and was pleased and
surprised to find that there was a reference book already on the
shelf with what he needed.
If you take a phase-like thing x around a loop, you get back x.g for
some element g of the group. Now suppose you take x.h around the loop
instead. Because the entire situation is symmetrical under action by the
group, this has to come back as (x.g).h=x.gh. This,
however, is not necessarily the same as (x.h).g=x.hg, unless the
group is commutative. So the question "what do you get multiplied
by when you go around a loop" isn't the right question anymore.
One could say in U(1) electromagnetism "I get shifted by 30
degrees in phase", but here you might get acted upon by g
(as x does) or by h^(-1)gh, which is what is needed to take x.h to x.gh:
(x.h)h^(-1)gh=x.gh. This is such as to make the analysis
more difficult.
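[Editor's sketch, not from the thread: the commutative versus non-commutative
contrast in numbers. For U(1), conjugating g by h changes nothing; for SU(2)
(matrices of the special form discussed earlier) it generally does, so "what
you get multiplied by around a loop" depends on how you got there. The helper
su2() and the particular elements are my own choices.]
import numpy as np
def su2(a, b, c, d):
    v = np.array([a, b, c, d], dtype=float)
    a, b, c, d = v / np.linalg.norm(v)
    return np.array([[a + 1j*b, -c + 1j*d],
                     [c + 1j*d,  a - 1j*b]])
g, h = su2(1, 1, 0, 0), su2(1, 0, 1, 0)
print(np.allclose(np.linalg.inv(h) @ g @ h, g))    # False: h^{-1} g h differs from g
g1, h1 = np.exp(0.3j), np.exp(1.1j)                # two U(1) elements (phases)
print(np.isclose(np.conj(h1) * g1 * h1, g1))       # True: phases commute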
I would be very surprised if a message like this were enough to explain
gauge field theory to someone, especially coming from someone who does not
himself specialize in it, but it's interesting
at least to try.
Keith Ramsay
------
I might add that "does not specialize" is rather an understatement.
I'll be happy if it just makes sense.
>OK, right. Is it a necessary and sufficient (gosh my old math teacher
>will be pleased this one surfaced after 30 yrs) condition that the
>adjoint should be the inverse for U(n)?
Yes indeed. I'd already given this as the definition of U(n):
>>Now, we say that a complex matrix is unitary if TT* and T*T are
>>both the identity matrix. Actually, for n x n matrices when n is
>>finite (the only case we're worrying about here!), if TT* = 1 then
>>T*T = 1 as well, so we can just say:
>
>>T is unitary if TT* = 1. (T* being the adjoint of T)
Of course, I didn't say that this condition was *necessary* for T to be
unitary, so if you didn't know I was giving you the *definition* of
unitary, you might have thought I was being coy and merely giving you
a *sufficient* condition! Back when I was even more of a nitpicker
than I am now, I used to be annoyed at how textbooks always do this:
they say
We define a matrix to be unitary if TT* = 1.
instead of
We define a matrix to be unitary if and only if TT* = 1.
which is a tiny bit more accurate, since it serves as a promise
that there is not a little footnote way back on page 248 where they
say "Oh yeah, and by the way, we ALSO define a matrix to be unitary
if it is a 52x78 matrix with every entry being 133".
However, I never saw any footnotes like that, so I eventually learned
to stop worrying about it.
----------------------------------------------------------------------
Okay, let's see, we were talking about U(2) and you guessed a
possible form for matrices in U(2):
>>>Try something more general.
>>>
>>>[a+bi -x-yi]
>>>[x+yi a+bi] looks general enough.
>>
>>My gosh, Oz, how did you dream up that special form? If I didn't know
>>better I'd guess you were reading books about Pauli matrices!
>
>Hahahahaha!! What a lovely idea.
>However it was me being mindless yet again if you chop out some
>intervening text as I have above.
Actually it was far from mindless; you almost reinvented the Pauli
matrices here, which are the standard way of working with the group
SU(2). Good work! You just got a couple of signs twisted --- that's
the sort of thing one straightens out only by messing around a bit.
So then I fixed the signs... and we saw that
[a+bi -c+di]
[c+di a-bi]
is in U(2) if and only if a^2 + b^2 + c^2 + d^2 = 1.
In short, what we've got here is just S^3, the unit sphere in
4 dimensions!
>Ah, how very convenient. Four dimensions is a tad tricky to deal with,
>but here we can do it with a 2D complex 2x2 matrix. Heck, you can even
>write it on 2D paper. I can see some potential utility for this
>structure.
Indeed. The above matrix can also be thought of as a unit quaternion.
It only takes a single quaternion to capture 4 dimensions! However,
quaternions had gone out of fashion by the time Pauli needed them, which
is why he used certain 2x2 complex matrices.
Anyway, no matter what formalism we use, what we've got here is S^3.
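[Editor's sketch, not from the thread: a numerical check that matrices of
this special form, with a^2 + b^2 + c^2 + d^2 = 1, really do form a group
sitting on S^3 --- the product of two of them has the same form, is unitary,
and has determinant 1. This is SU(2), alias the unit quaternions.]
import numpy as np
rng = np.random.default_rng(2)
def mat(v):
    a, b, c, d = v / np.linalg.norm(v)
    return np.array([[a + 1j*b, -c + 1j*d],
                     [c + 1j*d,  a - 1j*b]])
def coords(T):
    # read the four real numbers back off a matrix of that form
    return np.array([T[0, 0].real, T[0, 0].imag, T[1, 0].real, T[1, 0].imag])
U, V = mat(rng.normal(size=4)), mat(rng.normal(size=4))
P = U @ V
print(np.allclose(P, mat(coords(P))))            # True: the product has the same form
print(np.isclose(np.linalg.det(P), 1))           # True: determinant stays 1
print(np.isclose(np.linalg.norm(coords(P)), 1))  # True: (a,b,c,d) stays on S^3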
>Mind you, why not S^4 since it has 4 dimensions?
>Glory me, you aren't going to have one dimension 'special' are you?
It's called S^3 because the surface of the sphere itself is 3-dimensional,
one less dimension than the 4-dimensional space it's sitting in. Lots of
times mathematicians want to work with S^3 abstractly, not thinking of it
as embedded in some ambient space of higher dimension. It's a 3-dimensional
manifold... so we call it S^3.
---------------------------------------------------------------------------
Now, if we jumped to an erroneous conclusion we might guess that we've just
figured out how to describe *all* matrices in U(2).
>>However, not all matrices in U(2) are of the form
>>
>>[a+bi -x+yi]
>>[x+yi a-bi]
>>
>>where a^2 + b^2 + x^2 + y^2 = 1. To see why, work out the
>>determinant of this matrix.
>
>Determinants! I've only ever done two in my life. I need a book ...
>Ah, my son's GCSE maths book gives a crib. OK let's go.
>
>det= (a+bi)(a-bi) + (x-yi)(x+yi) = a^2 + b^2 + x^2 + y^2 = 1 (see above)
>
>OK so this group (unproven, but there you go) has determinant = 1.
Great! So: we are only getting matrices with DETERMINANT = 1. Is
this all of U(2)? Surely not! (See below.)
>Now, where have I heard that before?
Well, *one* place you have heard this before is when I told you the
definition of SU(n). Could you remind us of that definition?
>I think it would help if I could find a member of U(2) that wasn't in
>S^3, for some reason I just can't find an appropriate structure.
>
>Anyway, it's plausible (likely, even) that there are elements in U(2)
>that aren't in S^3.
Indeed. Can you find a matrix of the form
[z 0]
[0 z]
for some complex number z, which is unitary but does not have
determinant 1?
-------------------------------------------------------------------------
Okay, now to the physics applications of all of this stuff:
>>>>Electromagnetism is all about the circle. Cool, huh?
>>>
>>>Er ... but ..... well ..... I don't really see the connection.
>>
>>The *connection*? Ha-ha! Great pun! How devilishly sneaky of you,
>>Oz.
>
>Ah.
>I get the impression I said something smart.
>I would feel ever so much better if I knew what it was.
>The only connection (mathematically) I have come across was in the GR
>Tut, but this was handed down from on high and The Wiz positively
>refused to explain how he got it. Turned me into an intestinal parasite
>for looking, if I remember correctly.
Well, once upon a time in that thread, when we were talking
about curvature, Ed Green wrote:
"I see there already *is* a tiny analogue to Stokes theorem floating
around here, since the discrepancy here depends on the *area* of the
path, epsilon^2 . The Riemann curvature tensor is thus some analogue
(generalization?) of the curl of a vector field."
and I replied:
"Very, very, very very smart observation.
This observation is at the basis of all our present-day theories of
forces: Maxwell's equations for electromagnetism, Einstein's theory of
general relativity, and the Yang-Mills equations for the electroweak and
strong forces. We say they are all "gauge theories". What this means
is that the basic field involved in any of these forces is a
"connection" which describes what happens to particles when you move
them along a path. Various internal degrees of freedom get "parallel
transported". When you take them around a loop they don't come back as
they were. When you study this infinitesimally, you get a "curvature
tensor" describing parallel translation around infinitesimal
parallelograms --- of which the Riemann curvature is an example. There
is a formula for this curvature tensor as a kind of "curl" of the
connection. In the case of magnetostatics, this takes the simple form:
B = curl A
That is, the magnetic field is the curl of the vector potential.
In the case of electromagnetism in 4d spacetime we have the same sort of
thing:
F = dA
where the electromagnetic field F is the curvature and the (4d) vector
potential A is the connection. Here d is a 4d analog of the curl,
called the "exterior derivative".
In Yang-Mills theory and gravity we have the same sort of thing, only
somewhat fancier. Not too surprising, in a way, since Einstein, Yang
and Mills were all deliberately trying to copy the shining example of
Maxwell's equations.
Now you might ask: in general relativity, when I parallel translate a
tangent vector around a loop, its actual direction changes. But in
electromagnetism, if I carry a charged particle around a loop, what
changes?
Its phase!"
And now you know that its phase is an element of U(1), so all the
pieces should start falling into place: groups, connections, parallel
transport, curvature, and the idea that the curvature is a kind of
"curl" of the connection. Once you've got all these ingredients in
place, gauge theory is easy, and Maxwell's equations are the simplest
of all gauge theories.
You have already worked with that big scary formula for the curvature
as a kind of "curl" of the connection, back in the general relativity
tutorial. Now what you need is to learn a bit more group theory and
geometry, to understand the simple ideas lurking behind that scary
formula.
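[Editor's sketch, not from the thread: the magnetostatic case B = curl A in a
few lines of sympy. The vector potential A = (-B0*y/2, B0*x/2, 0) is a
standard choice for a uniform field; the "curvature" of this connection is a
constant field B0 in the z direction.]
import sympy as sp
x, y, z, B0 = sp.symbols('x y z B0')
Ax, Ay, Az = -B0 * y / 2, B0 * x / 2, sp.Integer(0)
B = (sp.diff(Az, y) - sp.diff(Ay, z),
     sp.diff(Ax, z) - sp.diff(Az, x),
     sp.diff(Ay, x) - sp.diff(Ax, y))
print(B)    # (0, 0, B0)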
>How on earth does Maxwell come out of a group U(1)?
>It's just not plausible.
That's the most beautiful thing about physics: something which starts
out seeming outrageous and bizarre seems perfectly obvious and natural
once you understand it.
>I hope this thread gets at least that far in the fullness of time.
It will, if we keep working at it...
>I want to see how Maxwell's equations "pop out". It's good practice for
>you! What was that thing that Feynman (I think) said? Something like, "If
>you can't explain it with tables, chairs and beer mugs, you don't
>understand it." (Maybe I should look up that quote.)
The "tables, chairs and beer mugs" quote is from Hilbert, and made a
very different point. Hilbert was discussing geometry as a purely
formal axiomatic system. It should be possible (Hilbert said) to
replace the terms "point, line, plane" in the axioms and theorems of
Euclidean geometry with "table, chair, beer-mug", and still have a
valid logical argument. Moreover, this substitution would help reveal
logical gaps, gaps that visual intuition might cover over with the
usual terminology.
Sorry.
Note to Oz: geometrically speaking, O(n) is the group of all
rotations, reflections, and products thereof in R^n. SO(n) is
the group of rotations in R^n --- but *not* reflections.
As we've seen, U(1) consists of all rotations in the complex
plane --- but *not* reflections. If we think of this plane as
R^2, we thus see that U(1) is the same as SO(2). To get O(2),
we'd have to throw in reflections. A typical sort of reflection
in the complex plane is the operation of complex conjugation, which
takes x+iy to x-iy, thus amounting to a reflection across the x
axis.
More algebraically speaking, remember that "S" in front of O(n)
or U(n) means "with determinant 1". A reflection has determinant
-1, so SO(2) doesn't include reflections.
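[Editor's sketch, not from the thread: the determinant test in two lines.
Viewing the complex plane as R^2, multiplication by i is a rotation and
complex conjugation is a reflection; only the first has determinant 1.]
import numpy as np
rotation    = np.array([[0, -1], [1, 0]])   # multiplication by i
conjugation = np.array([[1, 0], [0, -1]])   # x + iy -> x - iy
print(np.linalg.det(rotation))              #  1.0: in SO(2), i.e. a U(1) element
print(np.linalg.det(conjugation))           # -1.0: in O(2) but not SO(2)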
>Could someone please explain in a few words what 'gravito-magnetism'
>should be? Obviously GR demands it to be there...
>From "week80":
Ignazio Ciufolini and John Archibald Wheeler, Gravitation and Inertia,
Princeton University Press, 1995.
This book by Ciufolini and Wheeler is full of interesting stuff, but it
concentrates on "gravitomagnetism": the tendency, predicted by general
relativity, for a massive spinning body to apply a torque to nearby objects.
This is related to Mach's old idea that just as spinning a bucket pulls
the water in it up to the edges, thanks to the centrifugal force, the
same thing should happen if instead we make lots of stars rotate around
the bucket! Einstein's theory of general relativity was inspired by Mach,
but there has been a long-running debate over whether general relativity is
"truly Machian" --- in part because nobody knows what "truly Machian" means.
In any event, Ciufolini and Wheeler argue that gravitomagnetism exhibits
the Machian nature of general relativity, and they give a very nice
tour of gravitomagnetic effects.
That is fine in theory. However, the gravitomagnetic effect has never yet
been observed! It was supposed to be tested by Gravity Probe B, a satellite
flying at an altitude of about 650 kilometers, containing a superconducting
gyroscope that should precess at a rate of 42 milliarcseconds per year
thanks to gravitomagnetism. I don't know what ever happened with this,
though: the following web page says "Gravity Probe B is expected to fly in
1995", but now it's 1996, right? Maybe someone can clue me in to the latest
news.... I seem to remember some arguments about funding the program.
Gravity Probe B, http://stugyro.stanford.edu/RELATIVITY/GPB/
[After I posted "week80", the discussion made it clear that budget
cuts are still holding up the launch of Gravity Probe B. Check out
the website, though - it's a pretty cool experiment!]
Oops! ... I really confused things there, didn't I? Well, I learnt
something in the end. ... Or did I? What Hilbert said doesn't *really*
make sense, does it? I mean, it doesn't really make sense to replace point,
line, plane with table, chair, beer mug - does it? I'd like to understand
this, it sounds like it could be important, or at least useful.
Anyone gonna pick it up? Michael?
>The modern definition of a vector space first appears in [Weyl's]
>book "Space, Time, Matter"! [...]
Actually, Peano had given it back in the 1880s, but almost nobody noticed.
Peano himself never referred to it again.
Weyl's definition was also ignored at the time. A few years later, Banach,
Hahn, and Wiener independently gave the modern notion, and put it to work,
and this time people noticed.
--
-Matthew P Wiener (wee...@sagi.wistar.upenn.edu)
Ooops. Now I'm for it.
>=================================
>
>>Um. Let's play with this a bit. So just sticking with reals (which are
>>complex really, just simple ones), after an absurd amount of playing
>>about I eventually find that
>>
>>[a -(1-a^2)^(.5)] 0<a=<1
>>[ ]
>>[(1-a^2)^(.5) a ] are a whole bunch in U(2) as well.
>
>Right. Those are rotations. When dealing with real numbers, one calls them
>"orthogonal" matricies. The matricies in U(2) with real
>entries constitute O(2).
>
>If a=cos(t), then you have the rotations
>
>[cos(t) -sin(t)]
>[sin(t) cos(t)]
>
>which are what you've got if you use the right value for t.
Of course. A timely reminder.
>>Presumably 'isomorphic' means each element can be matched one for one
>>using the same 'conversion'?
>
>And the multiplication corresponds too. To a+bi we associate
>
>[a b]
>[-b a]
Ok
>
>If (a+bi)(c+di)=(ac-bd)+(ad+bc)i, then for
>this to be a homomorphism, one needs
>
>[a b][c d]=[ac-bd ad+bc]
>[-b a][-d c] [-ad-bc ac-bd]
>
>which is true. For a homomorphism to be an isomorphism,
>it has to be a 1-1 correspondence between the two
>groups.
Since these terms are bandied about so much I suppose I ought to try to
get them straight. Actually, it would be a small miracle if I remember
them, but I know I ought to.
>Well, the fields which are described by SU(3) and SU(2) gauge field theory
>are like the electromagnetic field, except that
>instead of telling you how to change the phase of something
>(described by U(1)) as you go from place to place in space-time, they tell
>you how to change something which is acted upon by SU(3) or SU(2) as one
>goes from place to place.
>
>This last remark may seem even more obscure than the rest of the story,
Indeed true at this stage. Unfortunately.
I have snipped the explanation because I don't think I am ready to
follow it properly. However it has given a sight of the summit and a
glimmer of things to come. I am sure that when I hopefully get there,
the pre-exposure will allow me to follow very much better that I would
otherwise. I feel that there is an early initial step that I am missing.
Actually on reflection, there are probably a lot of initial steps I am
missing. :-)
Actually this sounds more pessimistic than it should. I did follow the
outline, OK. I think .....
>I would be very surprised if a message like this were enough to explain
>gauge field theory to someone,
Who knows nothing about it? A fair comment. :)
But you have given an inkling, which is most useful right now.
>especially coming from someone who does not
>himself specialize in it, but it's interesting
>at least to try.
I hope so.
Well, I'll have a shot at this by way of an example, not geometry but in
the spirit of Hilbert's point. Below I give a certain theory: Q, expressed
in predicate logic.
This does not go the whole hog, as I've used sensible mnemonics
for the logical symbols and equality:
Ax is "for all x"
Ex is "there exists x"
-> is logical implication
= is equality
<> is non-equality
x and y are variables.
parentheses have their usual meaning.
The theory comprises the four non-logical symbols:
table: a name (or zero place function symbol)
chair: a one place function symbol
beermug, beermat: two place function symbols
and the following seven axioms:
A1. AxAy (chair x = chair y) -> x = y
A2. Ax table <> chair x
A3. Ax (x <> table -> Ey x = chair y)
A4. Ax beermat (x, table) = x
A5. AxAy beermat (x, chair y) = chair beermat (x,y)
A6. Ax beermug (x, table) = table
A7. AxAy beermug (x, chair y) = beermat (beermug (x,y), x)
Suppose we wish to determine whether the following sentences
are theorems of Q:
S1. Ax x <> chair x
S2. AxAy beermat (x, table) = beermat (table, x)
A sentence is a theorem of Q if it can be derived using the
axioms of Q and the rules of predicate calculus. As I understand
it, part of Hilbert's point was that if we use mnemonic symbols
for table, chair, etc. we might come up with an invalid argument
that looks OK because our intuition leads us to overlook a missing
or incorrect step as Michael Weiss pointed out.
The example above sort of illustrates this. If the four non-logical
symbols are replaced by the usual symbols, then one might be tempted
to assume that S1 and S2 are true (I'm sure I'm not the only person
who was surprised to find that they are false).
For anyone who wants to show that S1 and S2 are false, but doesn't
know how to set about it, one way is to find a model (an algebra)
that satisfies A1-A7, but not S1 and S2. If there's any interest
I'll post a solution in a few days if no-one else does (although
having written all this, I'm starting to feel it's not really a
proper spr topic...)
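[Editor's sketch, not from the thread, and not necessarily the poster's
intended solution: one concrete structure that can be spot-checked against
A1-A7 while violating S1 and S2. The domain is the natural numbers plus two
extra elements 'a' and 'b'; chair (successor) fixes the extra elements, and
the choices left open by the axioms are made so that beermat(0, 'a') = 'b'.
The axioms are only machine-checked over a finite sample of the infinite
domain, so this is a sanity check, not a proof.]
NONSTD = {'a', 'b'}
def chair(x):                                  # successor
    return x if x in NONSTD else x + 1
def beermat(x, y):                             # "addition", forced by A4/A5 plus free choices
    if y == 0:
        return x                               # A4
    if y in NONSTD:
        if x in NONSTD:
            return x
        return 'b' if y == 'a' else 'a'        # in particular 0 + a = b, which kills S2
    return chair(beermat(x, y - 1))            # A5
def beermug(x, y):                             # "multiplication", forced by A6/A7 plus free choices
    if y == 0:
        return 0                               # A6
    if y in NONSTD:
        if x == 0:
            return 0
        return x if x in NONSTD else 'a'       # any z with beermat(z, x) = z will do
    return beermat(beermug(x, y - 1), x)       # A7
D = list(range(6)) + ['a', 'b']                # finite sample of the domain
assert all(chair(u) != chair(v) for u in D for v in D if u != v)                      # A1
assert all(chair(u) != 0 for u in D)                                                  # A2
assert all(u == 0 or any(u == chair(v) for v in D) for u in D)                        # A3
assert all(beermat(u, 0) == u for u in D)                                             # A4
assert all(beermat(u, chair(v)) == chair(beermat(u, v)) for u in D for v in D)        # A5
assert all(beermug(u, 0) == 0 for u in D)                                             # A6
assert all(beermug(u, chair(v)) == beermat(beermug(u, v), u) for u in D for v in D)   # A7
print(chair('a') == 'a')                   # True: some x equals its own chair, so S1 fails
print(beermat('a', 0), beermat(0, 'a'))    # a b : so S2 fails too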
On the firing squad problem:
The solution of sending signals at relative speeds 3:1 does indeed
go through when worked out in detail. Other relative rates can also be
made to work -- with more difficulty and/or more signals. However,
as far as I recall (this was 15 years ago) the person who introduced
me to this problem suggested that there was a different kind of
solution. I certainly couldn't find one.
> In any event, Ciufolini and Wheeler argue that gravitomagnetism exhibits
> the Machian nature of general relativity, and they give a very nice
> tour of gravitomagnetic effects.
What do you think about the papers of Ning Li and D.G. Torr concerning
gravitomagnetic and gravitoelectric field generation in a rotating
superconductor exposed to a changing magnetic field? Dr. Li is
currently testing these predictions in a joint Marshall Space Flight
Center / University of Alabama Huntsville experiment ostensibly meant to
reproduce the Podkletnov gravity modification experiment. See my web
site http://www.inetarena.com/~noetic/pls/gravity.html
for details.
> That is fine in theory. However, the gravitomagnetic effect has never yet
> been observed! It was supposed to be tested by Gravity Probe B, a satellite
> flying at an altitude of about 650 kilometers, containing a superconducting
> gyroscope that should precess at a rate of 42 milliarcseconds per year
> thanks to gravitomagnetism. I don't know what ever happened with this,
> though: the following web page says "Gravity Probe B is expected to fly in
> 1995", but now it's 1996, right? Maybe someone can clue me in to the latest
> news.... I seem to remember some arguments about funding the program.
>
> Gravity Probe B, http://stugyro.stanford.edu/RELATIVITY/GPB/
>
> [After I posted "week80", the discussion made it clear that budget
> cuts are still holding up the launch of Gravity Probe B. Check out
> the website, though - it's a pretty cool experiment!]
Sorry if I missed this discussion, but at the following web
>site: http://cspara.uah.edu/www/research/gravity.htmlx#GPB
they mention a launch date of the year 2000. (they also discuss the
gravity modification experiment)
-Pete Skeggs
(remove -nojunkmail- from my address to reply to me)
>> In any event, Ciufolini and Wheeler argue that gravitomagnetism exhibits
>> the Machian nature of general relativity, and they give a very nice
>> tour of gravitomagnetic effects.
>What do you think about the papers of Ning Li and D.G. Torr concerning
>gravitomagnetic and gravitoelectric field generation in a rotating
>superconductor exposed to a changing magnetic field?
I don't know; I haven't read them. All I know is that any
*measurable* effect of this sort must have nothing to do with the
gravitomagnetic effect predicted by general relativity.
The gravitomagnetic effect predicted by general relativity is not
produced by magnetic fields; it is produced by a spinning mass. It is
called "gravitomagnetic" because it is mathematically analogous
to the magnetic field produced by a current loop.
Similarly, the "gravitoelectric effect" predicted by general relativity
is the ordinary gravitational field produced by a static mass,
analogous to the ordinary electric field produced by a static charge.
Actually, people rarely call this the "gravitoelectric effect" ---
usually they just call it gravity!
As I explained in my last post, general relativity predicts that the
rotation of the earth produces a barely detectable gravitomagnetic
effect, capable of making a gyroscope precess by only 42 milliarcseconds per
year. The gravitomagnetic effect produced by a rotating object in
somebody's lab would be far smaller still!
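For a rough sense of scale, here is a back-of-envelope Python sketch; the
orbit-averaged formula Omega ~ G*J / (2 c^2 a^3) for a polar orbit and all the
rounded constants are my own illustrative assumptions, not numbers taken from
this thread, but they land close to the 42 milliarcsecond figure.

# Back-of-envelope estimate of the gravitomagnetic (Lense-Thirring) gyro
# precession for a polar orbit at roughly Gravity Probe B's altitude.  The
# orbit-averaged formula Omega ~ G*J / (2 c^2 a^3) and the rounded constants
# below are illustrative assumptions, not exact values.
G     = 6.674e-11        # m^3 kg^-1 s^-2
c     = 2.998e8          # m/s
I_e   = 8.0e37           # Earth's moment of inertia, kg m^2 (approx.)
omega = 7.292e-5         # Earth's rotation rate, rad/s
J     = I_e * omega      # Earth's spin angular momentum
a     = 6.371e6 + 6.5e5  # orbit radius: Earth radius + ~650 km altitude, m

Omega = G * J / (2 * c**2 * a**3)            # rad/s
mas_per_year = Omega * 3.156e7 * 206264.8e3  # rad/s -> milliarcseconds/yr
print("%.0f milliarcseconds per year" % mas_per_year)   # roughly 40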
Re Gravity Probe B:
>Sorry if I missed this discussion, but at the following web
>site: http://cspara.uah.edu/www/research/gravity.htmlx#GPB
>they mention a launch date of the year 2000.
That would be nice! I know that most NSF funding for research in
gravity is going to be directed towards LIGO (the gravitational wave
observatory) and away from other efforts, both theoretical and
experimental. I don't know if that will affect Gravity Probe B.
> >I want to see how Maxwell's equations "pop out". It's good practice for
> >you!
>
> Actually, I got a lot of practice when I wrote a book on this subject,
> called "Gauge Fields, Knots and Gravity". If you're in a hurry to see
> Maxwell's equations pop out of U(1) gauge theory, you can just read
> that!
Well, I'm reading that book at the moment. I'm only as far as ch5 though.
I had to get it through interlibrary loan and they don't give you much
time to read it, and I'm too skint to buy the book. D'ya know where I
can get a cheap copy?
> Of course, one can summarize it in a single sentence: "The curvature F
> of a U(1) connection is the electromagnetic field, and if we take the
> Lagrangian to be tr(F^2), we get Maxwell's equations."
Is that the same F as in
F = B + E^dt,
where ^ is the wedge operator thingy, not the "to the power of" operator?
I guess this must be incorrect because I don't see how to get Maxwell's
equations from this.
> You mention Feynman saying that you only really understand something
> if you can explain it by moving beer mugs around on the table and that
> sort of thing. If you want to see how he explains quantum
> electrodynamics in terms of U(1), see his book "QED: Strange Theory of
> Light and Matter". If you pay careful attention, you'll see he is
> always talking about circles and complex numbers and phases and
> things, which is his way of talking about U(1). A phase, or unit
> complex number, is the same as an element of U(1).
Oh yeah! That's one of my all-time fave books. Now your insistence that
elements of U(1) are just the phase makes more sense to me.
> Okay, so: move your particle around a little square in the xy plane.
> Say the sides of the square are length epsilon. Its phase gets multiplied
> by approximately
>
> exp(ik epsilon^2)
>
> where the number c depends on the electromagnetic field where your
I don't see any c above!
> little square is. What's that number k? It's the charge of your
> particle times the Z COMPONENT OF THE MAGNETIC FIELD!
>
> If we used a little square in yz plane we'd similarly get an answer
> involving the X COMPONENT OF THE MAGNETIC FIELD.
I don't *really* understand this but I can see some related things. I gave
the formula for the electromagnetic field, F above. The B and E are
B = B_x dy^dz + B_y dz^dx + B_z dx^dy,
E = E_x dx + E_y dy + E_z dz.
So I suppose we need to do something like integrate F inside our little
square. Now if our little square is in the yz plane then the only term
that is nonzero is
Integral B_x dy^dz,
and if our little square is small enough then we can treat B_x as
constant, so that
Integral B_x dy^dz ~ B_x epsilon**2.
where ** is the "to the power of" operator.
> What about if we move it around a little square in the xt plane,
> where t is the time axis? This is a little harder to do (I can
> explain how if you can't figure out how), but if you do it, you
> get the X COMPONENT OF THE ELECTRIC FIELD!
Well, a similar argument holds for the xt plane, but now
Integral E_x dx^dt ~ E_x epsilon**2.
How many marks out of 10 do I get? I strongly suspect that there's an
easier way of doing things than using differential forms.
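For what it's worth, the epsilon^2 claim is easy to check numerically: by
Stokes' theorem the flux of B through the little square equals the line
integral of the vector potential around its boundary, which (for unit charge)
is the k epsilon^2 in the exponent. Here is a small Python sketch of my own;
the symmetric-gauge potential A = (-B y/2, B x/2, 0) and all the numbers are
just illustrative assumptions.

# Check that the line integral of the vector potential around a small square
# in the xy plane is (approximately) B_z * epsilon^2, i.e. the k*epsilon^2
# appearing in the phase exp(i k epsilon^2).  Gauge choice and numbers are
# my own assumptions.
import numpy as np

B = 2.5            # constant magnetic field in the z direction
eps = 1e-3         # side of the little square
x0, y0 = 0.7, -0.4 # where the square sits (irrelevant for constant B)

def A(x, y):                       # symmetric-gauge vector potential
    return np.array([-B * y / 2, B * x / 2])

corners = [(x0, y0), (x0 + eps, y0), (x0 + eps, y0 + eps),
           (x0, y0 + eps), (x0, y0)]
phase = 0.0
N = 1000                           # integration steps per side
for (xa, ya), (xb, yb) in zip(corners, corners[1:]):
    for i in range(N):
        t = (i + 0.5) / N
        x, y = xa + t * (xb - xa), ya + t * (yb - ya)
        dl = np.array([(xb - xa) / N, (yb - ya) / N])
        phase += A(x, y) @ dl      # accumulate A . dl around the loop

print(phase, B * eps**2)           # the two numbers agree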
> Now, I haven't explained where Maxwell's equations come from,
No, and I haven't worked it out either yet.
Feynman's quote is referring to the spin-statistics theorem. He
explains how a student in his freshman lectures asked him about the
proof of spin-statistics. Feynman couldn't think of a way to easily
explain it, and remarked that he realized that he must not really
understand spin-statistics, because "Anything should be reducible to
the level of a freshman lecture, or else we don't really understand it".
I believe this is in the intro to his "QED: The Strange Theory of Light and
Matter" book, in which he goes on to say that he did figure out a way
to explain QED, so that's what he was going to do. (Of course, he later
did find a way to explain spin-statistics.)
--------------------------------------------
Charles Bloom cbl...@mail.utexas.edu
http://wwwvms.utexas.edu/~cbloom/index.html
> Actually, Peano had given it back in the 1870s, but almost nobody noticed.
> Peano himself never referred to it again.
>
> Weyl's definition was also ignored at the time. A few years later, Banach,
> Hahn, and Wiener independently gave the modern notion, and put it to work,
> and this time people noticed.
I'd be interested in the Peano reference if you have it about. I'm
interested in the history of these things but don't know how Peano could
have accomplished such a thing before a large amount of the work on
vectors and operational calculus by Heaviside and also by Gauss.
-jeff
[Moderator's note: We don't seem to be discussing physics anymore.
Followups set to sci.math. -TB]
In article <5rj11q$i...@agate.berkeley.edu>, John Baez
<ba...@math.mit.edu> writes
>In article <5r8e81$3...@agate.berkeley.edu>,
>Oz <O...@upthorpe.demon.co.uk> wrote:
>
>It's called S^3 because the surface of the sphere itself is 3-dimensional,
>one less dimension than the 4-dimensional space it's sitting in. Lots of
>times mathematicians want to work with S^3 abstractly, not thinking of it
>as embedded in some ambient space of higher dimension. It's a 3-dimensional
>manifold... so we call it S^3.
Hmmmm. I get the feeling one should just consider S^3 a bit before
proceeding further. Given the plethora of definitions that have spawned
themselves like dragon's teeth on this thread, all of which I will be
expected to remember (despite the fact that I have been known to forget
my own birthday), I ought to bang one into memory anyway.
So what does S^3 look like? Well, if we collect them into bunches of
different 4D radii (OK an infinite number of them, but no matter) and
just look at one of these, how can we represent it mentally?
Dunno. Hmmm. .... Oz ponders a while.
The most plausible physical description (without doing any math on them)
would appear to be a spherical balloon, at least if your path crosses
the center. It has zero radius until our viewpoint in time enters the 4D
sphere; then we would see the balloon inflate until time hits the (time)
center of the sphere, when the balloon's radius would equal the radius
of the 4D sphere, after which it would deflate to zero again. I am
mostly convinced that a path that didn't go through the center just
looks like a balloon that doesn't expand to the full radius. So I guess
the whole group looks like an infinite number of inflating balloons of
different radii (to inf) inflating and deflating all over space. Hang
on, they all have radius one, so you get to see spheres of max size zero
to one (in 3D) depending on the location of their origins.
Is this about right?
>---------------------------------------------------------------------------
>
>Now, if we jumped to an erroneous conclusion we might guess that we've just
>figured out how to describe *all* matrices in U(2).
>
>
>Great! So: we are only getting matrices with DETERMINANT = 1. Is
>this all of U(2)? Surely not! (See below.)
>
>>Now, where have I heard that before?
>
>Well, *one* place you have heard this before is when I told you the
>definition of SU(n). Could you remind us of that definition?
=======
JB said:
SU(n) - All n x n unitary complex matrices with determinant 1.
The "special unitary group".
=======
Ooooh. So I have only managed to find members of a sub-group, to wit
SU(2). However, this was indeed one I was interested in.
>[z 0]
>[0 z]
>
>for some complex number z, which is unitary but does not have
>determinant 1?
Dang it. Homework again. Couldn't find one before, but I had better try
harder this time. Anyway, I was thinking of a more general form.
Unh.
Another elementary problem, so simple you didn't even think of it. My
son's GCSE book gives a method for determining the determinant for real
numbers, but no comment about complex ones. I'll have to look into this.
Hmmm. I presume the definition comes from A . A^(-1) = [1]
.... some time later I arrive at
[a b] _1_ [ d -b] [1 0]
[c d] . det . [-c  a] = [0 1] which was worth doing.
So det has the same form for complex numbers. (ie ad-bc)
[1 0]
So given the form requested above only [0 1] has det=1.
And to be a member of U(2), z = a + i(1-a^2)^(0.5) and there should be
an infinity of these. I also note that *no* matrix of form [a 0] a=/=b
[0 b]
can be in U(2). Hmmm, this is fun.
Ahh, there's another bunch [0 z] with z as above in U(2).
(det=1 iff z=+-i) [z 0]
Now, how about a form [a b] ? a=u+iv, b=w+ix, u,v,w,x different.
[b a]
After a period of my usual thrashing about I think these are not in
U(2). Assuming they are produces a contradiction........ I think. The
contradiction can be removed if at least u=0 or w=0, *and* at least v=0
or x=0.
So I seem to have figured out that at least some of U(2) comprises:
where a^2 + b^2 = 1 (clearly a,b can be negative)
  TYPE:    1            2           3        4        5  (found earlier)
[0    a+bi]  [a+bi 0   ]  [ai b ]  [a  bi]  [u+vi -x+yi]
[a+bi 0   ]  [0    a+bi]  [b  ai]  [bi a ]  [x+yi  u-vi]
                                            (u^2+v^2+x^2+y^2=1)
Now I did look at asymmetric matrices, but quite frankly it got messy.
Very messy. I have a gut feeling that there may be very few in U(2),
probably none. Note that 'very few' may well be infinite, but less
infinite than these simpler ones. I await expert help here.
Now, those with det=1:
Type 1 only if a=0, b=-+1.
Type 2 only if a=1, b=0.
Type 3 none
Type 4 all of them. Det= a^2+b^2.
Type 5 all of them. Det= a^2+b^2. Much more interesting.
It's not hard to see that the cases with det=1 of types 1 to 4 are also
special cases of type 5. The question is whether type 5 is the entire
group SU(2)?
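One quick sanity check (a numpy sketch of my own, not part of the homework) is
to generate random 'type 5' matrices with u^2+v^2+x^2+y^2 = 1 and confirm they
are unitary with determinant 1. That only shows type 5 sits *inside* SU(2), of
course; whether it exhausts SU(2) is the question left open above.

# Sanity check: random matrices of the "type 5" form
#   [u+vi  -x+yi]
#   [x+yi   u-vi]   with u^2+v^2+x^2+y^2 = 1
# are unitary and have determinant 1, i.e. they lie in SU(2).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    v4 = rng.normal(size=4)
    u, v, x, y = v4 / np.linalg.norm(v4)      # a random point on the 3-sphere
    M = np.array([[u + 1j * v, -x + 1j * y],
                  [x + 1j * y,  u - 1j * v]])
    unitary = np.allclose(M.conj().T @ M, np.eye(2))
    print(unitary, np.linalg.det(M))           # True, det = 1 (up to rounding)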
>-------------------------------------------------------------------------
I think if I pursue the next part of this thread at this stage, it's
going to get confusing. So really, it's just a preview. Me, I'm still on
the simple problems as above.
>
>Okay, now to the physics applications of all of this stuff:
>
>Well, once upon a time in that [GR] thread, when we were talking
I think we ought, eventually, to go into this. Of course it would help
if I could remember my grads, divs and curls. I seem to remember they
were pretty straightforward; unfortunately they aren't in my son's GCSE
maths book. (!!Cough, splutter). In fact now I actually think about it I
had better go into the cellar and find an old maths book that isn't too
covered by mildew. Somehow, I get the feeling that a sketchy knowledge
might be essential. (JB throws up his arms in horror at the enormity of
the job ahead.)
>In Yang-Mills theory and gravity we have the same sort of thing, only
>somewhat fancier. Not too surprising, in a way, since Einstein, Yang
>and Mills were all deliberately trying to copy the shining example of
>Maxwell's equations.
>
>Now you might ask: in general relativity, when I parallel translate a
>tangent vector around a loop, its actual direction changes. But in
>electromagnetism, if I carry a charged particle around a loop, what
>changes?
>
>Its phase!
Ah, the question is, the phase of what?
Do I feel some QED seeping from the woodwork?
>And now you know that its phase is an element of U(1), so all the
>pieces should start falling into place: groups, connections, parallel
>transport, curvature, and the idea that the curvature is a kind of
>"curl" of the connection. Once you've got all these ingredients in
>place, gauge theory is easy, and Maxwell's equations are the simplest
>of all gauge theories.
Better start there then, I think. Currently these don't seem to be happy
bedfellows to me.
>You have already worked with that big scary formula for the curvature
>as a kind of "curl" of the connection, back in the general relativity
>tutorial.
Oh yes, all those fun tensors.
I expect I have already forgotten my g_xy 's. :-(
>Now what you need is to learn a bit more group theory and
>geometry, to understand the simple ideas lurking behind that scary
>formula.
This is true. I suspect it might be best to start at the beginning and
plod steadily through to the end in a semblance of a structured way.
>If we used a little square in yt plane we'd similarly get an answer
>involving the Y COMPONENT OF THE MAGNETIC FIELD.
>If we used a little square in zt plane we'd similarly get an answer
>involving the Z COMPONENT OF THE MAGNETIC FIELD.
These should be electric field.
-- Toby
to...@ugcs.caltech.edu
[Moderator's note: the perils of cut-and-paste! Thanks. - jb]
: That is fine in theory. However, the gravitomagnetic effect has never yet
: been observed!
General Relativity and Quantum Cosmology, abstract gr-qc/9704065
From: Ignazio Ciufolini <ciu...@nero.ing.uniroma1.it>
Date: Wed, 23 Apr 97 20:51:55 METDST
Detection of Lense-Thirring Effect Due to Earth's Spin
Authors: I. Ciufolini, D. Lucchesi, F. Vespe, F. Chieppa
Comments: 9 pages, Latex version (detection.tex). Only fig. 4
included. This paper was submitted for publication in Nature
in April 1996 and resubmitted to Nature in March 1997
Rotation of a body, according to Einstein's theory of general
relativity, generates a "force" on other matter; in Newton's
gravitational theory only the mass of a body produces a force.
This phenomenon, due to currents of mass, is known as gravitomagnetism
owing to its formal analogies with magnetism due to currents of
electric charge. Therefore, according to general relativity,
Earth's rotation should influence the motion of its orbiting
satellites. Indeed, we analysed the laser ranging observations
of the orbits of the satellites LAGEOS and LAGEOS II, using a
program developed at NASA/GSFC, and obtained the first direct
measurement of the gravitomagnetic orbital perturbation due to
the Earth's rotation, known as the Lense-Thirring effect.
The accuracy of our measurement is about 25%.
The URL for the above abstract and a link to the paper is:
http://xxx.lanl.gov/abs/gr-qc/9704065
Also, see the 1994 article "Spin Dynamics of the LAGEOS
Satellite in Support of a Measurement of the Earth's
Gravitomagnetism" by Salman Habib, Daniel E. Holz, Arkady Kheyfets,
Richard A. Matzner, Warner A. Miller and Brian W. Tolman at
URL:
http://xxx.lanl.gov/abs/gr-qc/9406032
with Journal reference: Phys. Rev. D50 (1994) 6068-6079
> ...I had better go into the cellar and find an old maths book that isn't
> too covered by mildew. Somehow, I get the feeling that a sketchy
> knowledge might be essential.
...
>> Now what you need is to learn a bit more group theory and
>> geometry, to understand the simple ideas lurking behind that scary
>> formula.
>
> This is true. I suspect it might be best to start at the beginning and
> plod steadily through to the end in a semblance of a structured way.
Someone somewhere* recommended the book "Encyclopedic Dictionary of
Mathematics, second edition" by the Mathematical Society of Japan. I
webbed right over to Amazon.com and for $70 US had it sent out (was a
special order). It just arrived yesterday, and already it has paid for
itself. Have you ever lurked through "This Week's Finds" and wondered
what a category was? This book has helped me sense what the players
are. (And a few frequent readers of this newsgroup might suspect that I
might hope to apply category theory to quaternions someday, and they would
be right :-) You have to read about a dozen definitions before they start
to hang together, but this book is concise and logical.
Getting the definition right matters. In a different thread, I said I
needed to show two groups were "isomorphic". After thinking and reading
quite a few definitions, I think the right word is "homomorphic." Have
got to chew on that some more. For folks who want to follow the abstract
math that shows up in this newsgroup, the "EDM" is a useful investment.
Doug Sweetser
http://world.std.com/~sweetser
>Below I give a certain theory: Q, expressed in predicate logic.
>A2. Ax table <> chair table
I presume that was meant to be: A2. Ax table <> chair x
>and the following seven axioms:
OK - nice example, may I convert to more usual notation?
A1. AxAy (sx = sy) -> x = y
A2. Ax 0 <> sx
A3. Ax (x <> 0 -> Ey x = sy)
A4. Ax A(x, 0) = x
A5. AxAy A(x, sy) = sA(x,y)
A6. Ax M(x, 0) = 0
A7. AxAy M(x, sy) = A(M(x,y), x)
Is this Robinson arithmetic? Or just a subset of it?
Yes, without induction, one can usually add little extras (not the whole
usual "nonstandards" schmear!) to the standard naturals; no problems.
>Suppose we wish to determine whether the following sentences
>are theorems of Q:
>
>S1. Ax x <> sx
>S2. AxAy A(x, 0) = A(0, x)
>For anyone who wants to show that S1 and S2 are false, but doesn't
>know how to set about it, one way is to find a model (an algebra)
>that satisfies A1-A7, but not S1 and S2.
Well let's have a go.
Suppose we adjoin an infinite-looking element to the usual naturals.
Put s oo = oo , A(oo, n) = oo = A(oo,oo) = A(n,oo),
M(oo, 0) = 0 , M(oo, n) = oo (for n >= 1), M(n, oo) = oo = M(oo,oo).
I think that satisfies A1-7 but fails S1.
For S2 we could adjoin an extra infinity, w, satisfying...
s oo = w, sw = oo,
and where necessary, A( , ) = rightmost infinity, M( , ) = leftmost inf.
That fails S2. I suppose you could fail both at once with 3 inf-adjoins.
Hope there's no boo-boos there!
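In case anyone wants a mechanical check of the first construction, here is a
little Python sketch of my own that represents oo by floating-point infinity,
tests A1-A7 over a finite sample of the extended domain, and confirms that S1
fails at oo. (Finite sampling only, so it's a sanity check rather than a
proof.)

# Sanity check of the model N + {oo} with s(oo)=oo, A involving oo equal to
# oo, M(x,0)=0 and M involving oo equal to oo otherwise.
from itertools import product

oo = float('inf')
D = list(range(6)) + [oo]                      # a finite sample of the domain

s = lambda x: oo if x == oo else x + 1
A = lambda x, y: oo if oo in (x, y) else x + y
M = lambda x, y: 0 if y == 0 else (oo if oo in (x, y) else x * y)

assert all(not (s(x) == s(y) and x != y) for x, y in product(D, D))   # A1
assert all(s(x) != 0 for x in D)                                      # A2
# A3: every nonzero element is a successor (oo = s(oo), n+1 = s(n))
assert all(A(x, 0) == x for x in D)                                   # A4
assert all(A(x, s(y)) == s(A(x, y)) for x, y in product(D, D))        # A5
assert all(M(x, 0) == 0 for x in D)                                   # A6
assert all(M(x, s(y)) == A(M(x, y), x) for x, y in product(D, D))     # A7
print("S1 fails at oo:", not (oo != s(oo)))                           # True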
------
[Moderator's note: further discussion of formal logic might better
be directed to a newsgroup other than sci.physics.research. - jb]
------
>On the firing squad problem:
>The solution of sending signals at relative speeds 3:1 ...
>as far as I recall (this was 15 years ago) the person who introduced
>me to this problem suggested that there was a different kind of
>solution. I certainly couldn't find one.
He may have been thinking of the following: find a solution that
achieves all cells firing simultaneously at the *earliest* allowable
time. This cannot happen earlier than time 2n-2 (counting time zero
as the state in which every cell except one end cell is quiescent),
as a little thought will show.
In fact, this limit *can* be achieved.
I include a little hint beyond the spoiler...
[spoiler]
v
v
v
v
v
v
v
v
v
v
v
v
v
v
v
v
Try the effect of letting the end soldier send out successive signals
with speeds of 1, 1/3, 1/7, 1/15, indefinitely. Where do they meet the
first return signal? What should happen there? Can you implement it?
Have a sleepless night! :)
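If it helps, the kinematics behind that hint can be worked out in a few lines;
here is a small Python sketch of my own (it only does the signal bookkeeping,
not the actual automaton, and n = 17 is just an example).

# Where does a signal of speed 1/(2**k - 1), sent from the left end (cell 0)
# at time 0, meet the speed-1 signal after it bounces off the right end
# (cell n-1)?  Kinematics only, not the full firing-squad construction.
n = 17
for k in range(1, 6):
    speed = 1.0 / (2**k - 1)
    # reflected fast signal: x = 2*(n-1) - t ; slow signal: x = speed * t
    t_meet = 2 * (n - 1) / (1 + speed)
    x_meet = speed * t_meet
    print("k=%d: meet at cell %.3f, time %.3f" % (k, x_meet, t_meet))
# The meeting points are (n-1), (n-1)/2, (n-1)/4, ... -- successive halvings,
# which is exactly the bisection a minimal-time (2n-2) solution can exploit.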
-------------------------------------------------------
Bill Taylor W.Ta...@math.canterbury.ac.nz
-------------------------------------------------------
>I think we ought, eventually, to go into this. Of course it would help
>if I could remember my grads, divs and curls. I seem to remember they
>were pretty straightforward; unfortunately they aren't in my son's GCSE
>maths book. (!!Cough, splutter). In fact now I actually think about it I
>had better go into the cellar and find an old maths book that isn't too
>covered by mildew. Somehow, I get the feeling that a sketchy knowledge
>might be essential. (JB throws up his arms in horror at the enormity of
>the job ahead.)
DIFFERENTIATION
Given a scalar field f, grad f is the vector field (@f/@x, @f/@y, @f/@z).
Given a vector field F = (m,n,o), curl F is the vector field
(@o/@y - @n/@z, @m/@z - @o/@x, @n/@x - @m/@y).
Given a vector field F = (m,n,o), div F is the scalar field @m/@x+@n/@y+@o/@z.
Now, there is a basic formula for scalar fields which states that
df = @f/@x dx + @f/@y dy + @f/@z dz.
So df is like the gradient of f.
Now suppose F is the covector field m dx + n dy + o dz,
which is just another way to look at the vector field (m,n,o).
Then dF = d(m dx) + d(n dy) + d(o dz).
Following the product rule d(fg) = df g + f dg
(at least when f is a scalar field),
dF = dm dx + m ddx + dn dy + n ddy + do dz + o ddz.
But dd = 0, a basic and fundamental theorem that can be understood intuitively
in many interesting ways I won't describe now.
Therefore, dF = dm dx + dn dy + do dz
= @m/@x dx dx + @m/@y dy dx + @m/@z dz dx
+ @n/@x dx dy + @n/@y dy dy + @n/@z dz dy
+ @o/@x dx dz + @o/@y dy dz + @o/@z dz dz.
Because multiplication of differentials is antisymmetric (df dg = - dg df),
dF = (@o/@y - @n/@z) dy dz + (@m/@z - @o/@x) dz dx + (@n/@x - @m/@y) dx dy.
This corresponds to the curl of (m,n,o).
Now suppose F is the 2form m dy dz + n dz dx + o dx dy.
Then dF = dm dy dz + dn dz dx + do dx dy
= @m/@x dx dy dz + @n/@y dy dz dx + @o/@z dz dx dy
(all the other terms being 0 either because dd = 0 or because df df = 0).
So this is just like the divergence of (m,n,o).
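As a tiny cross-check of the dictionary above, here is a sympy sketch of my
own verifying the vector-calculus shadow of dd = 0: div(curl F) vanishes
identically for an arbitrary vector field F = (m,n,o).

# div(curl F) = 0 for any smooth vector field (m, n, o) -- the 1-form-to-
# 3-form instance of dd = 0.  A sketch of the identity, not a derivation.
import sympy as sp

x, y, z = sp.symbols('x y z')
m, n, o = (f(x, y, z) for f in sp.symbols('m n o', cls=sp.Function))

curl = (sp.diff(o, y) - sp.diff(n, z),
        sp.diff(m, z) - sp.diff(o, x),
        sp.diff(n, x) - sp.diff(m, y))
div_of_curl = sp.diff(curl[0], x) + sp.diff(curl[1], y) + sp.diff(curl[2], z)

print(sp.simplify(div_of_curl))   # prints 0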
INTEGRATION
0 dimensional integration is the evaluation of scalar fields at points.
For example, the integral of the scalar field f at the point p is f(p).
You might want to be funny and orient a point negatively;
then the value of the integral would be -f(p).
1 dimensional integration is the evaluation of covector fields along curves.
You can evaluate the vector field (m,n,o) along a curve using a line integral.
If F = (m,n,o), just integrate F . dr = m dx + n dy + o dz.
But this is just a covector field, so you can integrate it.
The direction along the curve matters;
changing this direction changes the sign of dr = (dx,dy,dz).
For example, the integral of x dx + z dy + y dz
along the line x = y in the xy plane from (0,0,0) to (1,1,0) is
int_0^1 x dx + int_0^1 z dy + int_0^0 y dz
= 1/2 + int_0^1 0 dy + 0 = 1/2.
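(That example is easy to check by machine, too; here is a sympy sketch of my
own, parametrising the segment as (t, t, 0).)

# Check of the example line integral: integrate x dx + z dy + y dz along the
# segment x = y, z = 0 from (0,0,0) to (1,1,0), parametrised as (t, t, 0).
import sympy as sp

t = sp.symbols('t')
x, y, z = t, t, 0*t
dx, dy, dz = (sp.diff(c, t) for c in (x, y, z))
integrand = x * dx + z * dy + y * dz
print(sp.integrate(integrand, (t, 0, 1)))    # 1/2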
2 dimensional integration is the evaluation of 2forms on surfaces.
You can evaluate the vector field (m,n,o) on a surface using a surface integral.
If F = (m,n,o), just integrate F . dS = m dy dz + n dz dx + o dx dy.
Here, dS = (dy dz, dz dx, dx dy) points in a direction normal to the surface.
The notation "dS" is historical; it's not the differential of any S.
In fact, dS is the cross product dr x dr / 2.
3 dimensional integration is the evaluation of 3forms in volumes.
You can evaluate the scalar field f in a volume using a volume integral.
dV = dr . dS / 3 = dr . dr x dr / 3!, and f dV is a 3form.
It makes sense to integrate all these forms over appropriate dimensions
even when working with manifolds of arbitrary dimension.
But you need a metric (dot product) to do line integrals,
and you need a metric in 3 dimensions for surface and volume integrals.
(Some of this can be generalized partially to other dimensions,
but you always need a metric to integrate vector fields.)
So the forms approach is better.
STOKES'S THEOREM
Suppose f is a scalar field.
The 2nd fundamental theorem of calculus for line integrals states that
the line integral of grad f along a curve from p to q is f(q) - f(p).
Let the boundary of a curve from p to q be q - p.
(That is, q is positively oriented and p is negatively oriented.)
Then the 2nd fundamental theorem is equivalent to saying
that the integral of df along a curve c is the integral of f at c's boundary.
Suppose F is a vector field.
The classical Stokes's theorem states that
the surface integral of curl F on a bounded surface S
is the line integral of F along the boundary of S.
Suppose F is now a covector field.
Stokes's theorem is equivalent to saying that
the integral of dF on S is the integral of F along the boundary of S.
Suppose F is a vector field.
Gauss's theorem states that the volume integral of div F in a bounded volume Q
is the surface integral of F on the boundary of Q.
Now suppose F is a 2form.
Gauss's theorem is equivalent to saying that
the integral of dF in Q is the integral of F on the boundary of Q.
In general, the generalized Stokes's theorem says that for any form F,
the integral of dF over an appropriate bounded region R
is the integral of F over the boundary of R.
This works in any dimension with any form.
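To see the general statement in action in the simplest nontrivial case, here
is a sympy sketch of my own checking it for a 1-form on the unit square, i.e.
Green's theorem: the integral of dF over the square equals the integral of F
around the (counterclockwise) boundary. The particular F is an arbitrary
example.

# Green's theorem as an instance of the generalized Stokes theorem:
# for F = m dx + n dy on the unit square, integral of dF over the square
# equals the integral of F around the counterclockwise boundary.
import sympy as sp

x, y, t = sp.symbols('x y t')
m, n = x * y**2, x**3                               # F = m dx + n dy

inside = sp.integrate(sp.diff(n, x) - sp.diff(m, y), (x, 0, 1), (y, 0, 1))

def edge(xt, yt):            # integral of F along one parametrised edge
    mt, nt = m.subs({x: xt, y: yt}), n.subs({x: xt, y: yt})
    return sp.integrate(mt * sp.diff(xt, t) + nt * sp.diff(yt, t), (t, 0, 1))

boundary = (edge(t, 0*t) + edge(1 + 0*t, t)             # bottom, right
            + edge(1 - t, 1 + 0*t) + edge(0*t, 1 - t))  # top, left
print(inside, boundary)                                  # both equal 1/2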
-- Toby
to...@ugcs.caltech.edu
> Actually, I got a lot of practice when I wrote a book on this subject,
> called "Gauge Fields, Knots and Gravity". If you're in a hurry to see
> Maxwell's equations pop out of U(1) gauge theory, you can just read
> that!
>
> Of course, one can summarize it in a single sentence: "The curvature F
> of a U(1) connection is the electromagnetic field, and if we take the
> Lagrangian to be tr(F^2), we get Maxwell's equations."
Could you briefly explain:
(1) what a U(1) connection is?
(2) how to find its curvature?
The obvious solution to this problem is to look up your book.
Unfortunately, MIT (your alma mater?) doesn't have it in the library
system.
>Hmmmm. I get the feeling one should just consider S^3 a bit before
>proceeding further.
Indeed, this is one of the simplest examples of a 3-dimensional
manifold, apart from good old R^3, so it's very important to be
able to visualize it if you have any intention of using visual
intuition in your study of manifolds (which I personally find
essential).
>So what does S^3 look like?
>The most plausible physical description (without doing any math on them)
>would appear to be a spherical balloon, at least if your path crosses
>the center. It has zero radius until our viewpoint in time enters the 4D
>sphere then we would see the balloon inflate until time hits the (time)
>center of the sphere, when the balloon's radius would equal the radius
>of the 4D sphere, after which it would deflate to zero again.
Good! I wish you'd call it a 3-sphere or S^3 instead of a "4D sphere"
--- it conveys a superficial impression of expertise --- but apart
from that you are right on target here. Just as parallel slices of an
ordinary sphere (a 2-sphere) are circles (i.e. 1-spheres) of varying
radius, so parallel slices of a 3-sphere are 2-spheres of varying radius.
To back this up with some equations, take the 3-sphere described by
x^2 + y^2 + z^2 + t^2 = 1
and slice it with a hyperplane where t is constant. You get a
2-sphere described by
x^2 + y^2 + z^2 = 1 - t^2
which is a sphere of radius sqrt(1 - t^2). As t --- think of it
as time if you like! --- goes from -infinity to infinity, this
sphere blossoms into existence when t = -1, swells to radius 1
at t = 0, and then shrinks back to nothing at t = 1.
Here we are using our "time intuition" to visualize something in 4
dimensions as a kind of "movie" consisting of 3-dimensional "frames".
This is very handy, particularly in physics where the 4th dimension
often quite literally represents time. However, we can stretch it if
we wish, visualizing a 5-sphere as a movie whose frames are 4-spheres
that appear, grow, shrink and disappear again, or for that matter
visualizing any (n+1)-sphere as a movie of n-spheres... with rapidly
lessening clarity as n increases. But the key with this visualization
stuff is not to actually visualize things, but to fool yourself into
*thinking* you are.
>I am
>mostly convinced that a path that didn't go through the center just
>looks like a balloon that doesn't expand to the full radius. So I guess
>the whole group looks like an infinite number of inflating balloons of
>different radii (to inf) inflating and deflating all over space. Hang
>on, they all have radius one, so you get to see spheres of max size zero
>to one (in 3D) depending on the location of their origins.
>
>Is this about right?
Umm, I have a bit of trouble grokking this paragraph, but let me just
say that any slice of the 3-sphere
x^2 + y^2 + z^2 + t^2 = 1
by a hyperplane is a 2-sphere, and this 2-sphere has radius 1 if and
only if the hyperplane passes through the origin.
Another good way to visualize S^3 is by pretending you are in it ---
the "intrinsic" rather than "extrinsic" way of visualizing a manifold.
If we were in S^3 instead of R^3, we could in principle take a trip
around the universe by sailing off along a geodesic in any direction.
Moreover, if we looked very carefully in any direction and stood still
long enough for light to make a round trip, way off in the distance we
would see the back of our head! (Assuming nothing obscured the view.)
I talked about this a bit in the last Week's Finds, where I described
S^n as the result of taking R^n and adding a "point at infinity". So:
you should also learn to visualize S^3 this way, as if you lived in it.
This is more practical than it might seem, since maybe we DO live in
S^3. In the standard big bang model, space is S^3 if there is enough
matter density. However, in the big bang model the radius of the S^3
changes with time and the universe recollapses before anyone moving
less than lightspeed can make a round trip. Above I was implicitly
assuming we inhabit an S^3 whose radius does not change with time.
This universe is called the "Einstein universe" since Einstein
proposed this shortly after inventing general relativity --- he needed
a cosmological constant to give negative energy density to the vacuum
to keep the radius of the S^3 from changing with time. As you know,
he called the cosmological constant his "biggest blunder", because
it prevented him from predicting the galactic redshifts, which were
shortly thereafter observed by Hubble. You win a few, you lose a few.
Well, this post is getting too long. (More precisely, someone is
urging me to get off the computer.) Thus I will defer detailed
discussions of SU(2) and U(2) to another post. But you have seen one
nice thing already, namely that SU(2) is a 3-sphere! So if we are
living in a 3-sphere, we are living in a GROUP! It may seem odd to be
stuck inside a mathematical object like a GROUP, but if you remember
that R^3 is also a group (with addition of vectors as the group
operation), it's really not so odd. In fact, you can think of R^3 as
a kind of "limit" of the group SU(2) as the radius of the latter goes
to infinity... but that is a topic for another course.
>Pete Skeggs wrote:
>>What do you think about the papers of Ning Li and D.G. Torr concerning
>>gravitomagnetic and gravitoelectric field generation in a rotating
>>superconductor exposed to a changing magnetic field?
>
> I don't know; I haven't read them. All I know is that any
> *measurable* effect of this sort must have nothing to do with the
> gravitomagnetic effect predicted by general relativity.
>
> The gravitomagnetic effect predicted by general relativity is not
> produced by magnetic fields; it is produced by a spinning mass. It is
> called "gravitomagnetic" because it is mathematically analogous
> to the magnetic field produced by a current loop.
Actually, that is what they talked about. In the superconductor the
trapped lattice ions, exposed to a rotating magnetic field, spin in
place at high velocity; they predict that the fact that these tiny
spinning masses all spin in lock-step results in all of the
gravitomagnetic effects adding up to form a macroscopically detectable
field -- in other words a macroscopic quantum effect, like
superconductivity itself. (At least that is what I think they claimed.)
> Similarly, the "gravitoelectric effect" predicted by general relativity
> is the ordinary gravitational field produced by a static mass,
> analogous to the ordinary electric field produced by a static charge.
> Actually, people rarely call this the "gravitoelectric effect" ---
> usually they just call it gravity!
Ahh, that explains a lot! The latest Li and Torr paper ends
with a statement that a measurable gravitoelectric effect should be
produced; the implications of this escaped me until your explanation.
-Pete Skeggs
> Steven Hall <Steve...@mit.edu> wrote:
>
> >John Baez wrote:
>
> >> "The curvature F
> >> of a U(1) connection is the electromagnetic field, and if we take the
> >> Lagrangian to be tr(F^2), we get Maxwell's equations."
>
> >Could you briefly explain:
> >(1) what a U(1) connection is?
> >(2) how to find its curvature?
[most of a good description snipped]
I understand (I think) what a 1-form is (eats vectors, spits out scalars).
I understand why it's watering down some to pretend a 1-form is a vector
field (vectors don't eat vectors and spit out scalars). I'm assuming that
the vectors that the 1-form eats are the displacements around the
infinitesimal loop.
The part I don't quite understand is why a 1-form can be described as a
"U(1) connection". That is, if I take the vector potential, and perform
some operations on it (march around an infinitesimal loop, etc.), I get a
number, which is the phase change, where I can relate the phase \phi to an
element u of U(1) by u=e^{i \phi}. But I could just as easily say I get a
change in something, \phi, where \phi is just an element of R, the real
line, and not call it a phase.
I guess I'm asking, What makes this a U(1) connection, rather than, say an
R connection? Maybe what I'm really asking is what the definition of an
X-connection is? Or have you already defined it, as something that takes
an infinitesimal loop and spits out an element of X?
> >Unfortunately, MIT (your alma mater?) doesn't have [your book] in the library
> >system.
>
> Hmmph! You should get them to order it --- tell them I won't respond
> to any fundraising appeals until they do.
[with my MIT hat on...] Maybe you could donate one?
>I understand (I think) what a 1-form is (eats vectors, spits out scalars).
>I understand why it's watering down some to pretend a 1-form is a vector
>field (vectors don't eat vectors and spit out scalars).
Oh, good. I'm afraid I was replying in a more elementary way
than necessary, then.
>I'm assuming that
>the vectors that the 1-form eats are the displacements around the
>infinitesimal loop.
Right, though it doesn't need to be a loop, of course. The 1-form eats the
tangent vector to a path and spits out a scalar describing the infinitesimal
change in the phase of a charged particle moving along that path.
>The part I don't quite understand is why a 1-form can be described as a
>"U(1) connection". That is, if I take the vector potential, and perform
>some operations on it (march around an infinitesimal loop, etc.), I get a
>number, which is the phase change, where I can relate the phase \phi to an
>element u of U(1) by u=e^{i \phi}. But I could just as easily say I get a
>change in something, \phi, where \phi is just an element of R, the real
>line, and not call it a phase.
Correct!
>I guess I'm asking, What makes this a U(1) connection, rather than, say an
>R connection?
In most situations you could indeed think of it as an R connection.
The basic advantages to thinking of it as a U(1) connection are:
1) In quantum mechanics the phase is better thought of as an element
of U(1) than as an element of R. As you note, any number phi in R
gives us an element exp(i phi) of U(1), but many different numbers
phi give the same phase exp(i phi). In physics what matters
is the number exp(i phi).
2) Quantization of charge is "explained" if we assume we are
working with a U(1) connection instead of an R connection.
If I were in less of a rush, I would say a lot more about this...
>Maybe what I'm really asking is what the definition of an
>X-connection is?
>Or have you already defined it, as something that takes
>an infinitesimal loop and spits out an element of X?
No... a rough definition is that it's something that takes a
*path* and spits out an element of the group X. However,
this is really only true in a special case --- the case
of "trivial X-bundles". For the full story you need to
learn more about "bundles". (My book goes into that stuff.)
Right, OK. Now you are getting down to some nitty gritty.
In article <5sgbss$c...@charity.ucr.edu>, john baez <ba...@math.ucr.edu>
writes
>In article <Steven_Hall-07...@nyquist.mit.edu>,
>Steven Hall <Steve...@mit.edu> wrote:
>
>>I understand (I think) what a 1-form is (eats vectors, spits out scalars).
>>I understand why it's watering down some to pretend a 1-form is a vector
>>field (vectors don't eat vectors and spit out scalars).
>>I'm assuming that
>>the vectors that the 1-form eats are the displacements around the
>>infinitesimal loop.
>
>Right, though it doesn't need to be a loop, of course. The 1-form eats the
>tangent vector to a path and spits out a scalar describing the infinitesimal
>change in the phase of a charged particle moving along that path.
Woah, please.
We have to be talking about something to do with QED, surely.
Perhaps I had better expose my ignorance so as to be slapped into
correct thinking by the assorted bods on this thread, and probably
others given half a chance.
Hmmm. What am I actually visualising here? I think I am seeing a field
of 1-forms (doubtless someone will exhibit an example in due course)
that I guess are expressing an electromagnetic field. This is not a
problem.
For convenience I am imagining a slow test particle (say an electron)
trundling through a static electric field, so I can initially neglect
magnetism and time variations. Unfortunately I never really did this
sort of stuff formally (or informally come to that), so I only visualise
it from gleanings; anyhow I must proceed. I can't remember exactly how
you are supposed to handle electrons under QED, but the general idea
seems to be that the probability of finding an electron at a point is
merely the summation of the phases from all possible paths. Conceptually
simple but mathematically tedious.
One could easily believe that since, in the absence of an electric
field, an electron travels in a straight line because the probabilities
a little distance (ie many wavelengths) from this line are very low
because the phases cancel out, or conversely close to the path (eg one
wavelength) they reinforce. I note the symmetry. Now, if I were to
superpose a 'field' that increased all the phases a bit on one side of
the path, and decreased them on the other side the particle should turn,
most easily visualised from considering the circular symmetry of the
straight electron path, and the linear superposition of the 'field'.
Is this somewhere near the formalism?
It would appear to me that the concept of 'force' is redundant in this
formalism, since the particle is simply proceeding along the path
determined by the phases, and hence its probability of existence at any
point.
>>I guess I'm asking, What makes this a U(1) connection, rather than, say an
>>R connection?
And I want to know what a 'connection' is!
Or, more exactly, how you derive one.
>2) Quantization of charge is "explained" if we assume we are
>working with a U(1) connection instead of an R connection.
A bit later on, perhaps?
>If I were in less of a rush, I would say a lot more about this...
One is sometimes grateful for small mercies ..... :-)
>Woah, please.
>
>We have to be talking about something to do with QED, surely.
Not really full-fledged QED: we haven't quantized the electromagnetic
field yet. We are trying to understand the *classical*
electromagnetic field by seeing what it does to *quantum* charged
particles, e.g. electrons. This is a kind of halfway house en route
from classical electromagnetism to QED: a "semiclassical" approach.
I think this is the easiest way to start understanding the deep inner
meaning of the electromagnetic field, and a good warmup for
understanding other gauge theories. In every gauge theory we have a
group G of symmetries of some sort of particle. In the case of
electromagnetism this group is U(1), the group of phases, i.e. unit
complex numbers. In what follows I'll talk about electromagnetism,
but you should secretly bear in mind that if we changed the group from
U(1) to something else, we would be describing some other force. (The
case of U(1) is especially simple and thus a bit misleading, so also
bear in mind that we'll have to work a bit harder when we turn to other
groups. Luckily you've already done general relativity which is the
case where G = SO(3,1).)
Okay. Say we've got a particle. We're treating it quantum-
mechanically, but we're gonna be pretty crass about its position and
momentum; we'll mainly worry about its "internal" degrees of freedom.
If you don't know what this means, don't worry about it. Let's say
the particle is in some state psi. Since we're treating the particle
quantum-mechanically, psi is a vector in some vector space or other.
Now say that g is an element of the group U(1), i.e. a phase.
Then we can multiply psi by this phase and get a new vector g psi.
Now: the electromagnetic field is described by a U(1) connection.
As I've explained several times, a U(1) connection can be thought
of as a gadget that assigns to any path in spacetime an element of
U(1). The idea is as follows: if we start with a particle in the state
psi, and move it along the path, its state will change. We want to know
what its new state is. And here's how we figure it out: we ask our
U(1) connection "what element of U(1) do you assign to this path?"
and it says "g". So you take the vector psi and multiply it by the
phase g and you get your answer:
g psi
is the particle's new state.
Actually this is what you do only if the particle has charge +1.
If the particle has charge -1, its state after you've dragged it
along the path will be
g^{-1} psi
If it has charge +3, its new state will be
g^3 psi
And so on: I hope you get the pattern!
But what was that stuff about the electromagnetic field as a 1-form?
Well, the point is, you can think of a 1-form as a U(1) connection.
I.e., you can think of a 1-form as a gadget that assigns to any path
in spacetime an element of U(1). Here's how: you take your 1-form
and integrate it along the path, and get a real number, say k, and
then you compute
e^{ik}
which is an element of U(1).
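To make "integrate the 1-form along the path, then exponentiate" completely
concrete, here is a toy Python sketch of my own; the particular 1-form and
path below are arbitrary illustrative choices, not anything from the physics
above.

# Toy version of "a 1-form gives a U(1) connection": integrate the 1-form
# along a path to get a real number k, then form exp(i k), an element of U(1).
import numpy as np

def A(x, y):                      # components (A_x, A_y) of an example 1-form
    return np.array([-0.3 * y, 0.3 * x])

def holonomy(path, N=2000):       # path: t in [0,1] -> (x(t), y(t))
    k = 0.0
    for i in range(N):
        t0, t1 = i / N, (i + 1) / N
        p0, p1 = np.array(path(t0)), np.array(path(t1))
        k += A(*((p0 + p1) / 2)) @ (p1 - p0)   # midpoint rule for int A
    return np.exp(1j * k)                      # the element of U(1)

semicircle = lambda t: (np.cos(np.pi * t), np.sin(np.pi * t))
print(holonomy(semicircle))        # a unit complex number, |g| = 1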
>Hmmm. What am I actually visualising here? I think I am seeing a field
>of 1-forms (doubtless someone will exhibit an example in due course)
>that I guess are expressing an electromagnetic field. This is not a
>problem.
I think I already explained what a 1-form looks like:
A = A_t dt + A_x dx + A_y dy + A_z dz
The component A_t is usually called the "electrostatic potential", and
for some reason people love to write it as "phi". People also often
call A_x, A_y, and A_z the "vector potential", though sometimes they
call the whole shebang, A, the "vector potential".
>For convenience I am imagining a slow test particle (say an electron)
>trundling through a static electric field, so I can initially neglect
>magnetism and time variations. Unfortunately I never really did this
>sort of stuff formally (or informally come to that), so I only visualise
>it from gleanings, anyhow I must proceed.
Okay, if you want a static electric field of magnitude E pointing in
the x direction, you want to take
A = - E x dt
>I can't remember exactly how
>you are supposed to handle electrons under QED, but the general idea
>seems to be that the probability of finding an electron at a point is
>merely the summation of the phases from all possible paths. Conceptually
>simple but mathematically tedious.
Here what I'm doing is considering a *single* path and telling you how
the electromagnetic field affects the amplitude for the particle to
take that path: it multiplies it by the phase g I was talking about
earlier. Then later, if we really wanted, we could sum over all
possible paths as you describe.
>One could easily believe that since, in the absence of an electric
>field, an electron travels in a straight line because the probabilities
>a little distance (ie many wavelengths) from this line are very low
>because the phases cancel out, or conversely close to the path (eg one
>wavelength) they reinforce. I note the symmetry. Now, if I were to
>superpose a 'field' that increased all the phases a bit on one side of
>the path, and decreased them on the other side the particle should turn,
>most easily visualised from considering the circular symmetry of the
>straight electron path, and the linear superposition of the 'field'.
>
>Is this somewhere near the formalism?
Yeah.
>It would appear to me that the concept of 'force' is redundant in this
>formalism, since the particle is simply proceeding along the path
>determined by the phases, and hence its probability of existence at any
>point.
Correct!!! Very good!!! Why am I so excited? Well, remember how
we used to talk about how gravity is no longer a "force" in general
relativity? Now we're seeing something similar happening to
electromagnetism! This is what happens when you start describing
the forces using gauge theory.
>And I want to know what a 'connection' is!
Read the above stuff and ask some more questions.
>>2) Quantization of charge is "explained" if we assume we are
>>working with a U(1) connection instead of an R connection.
>
>A bit later on, perhaps?
You can already perhaps see a bit about quantization of charge
in what I wrote above. The point is, we can take a phase g and
raise it to the nth power and get g^n, which is what we need to do
when studying a particle of charge n, but it doesn't make sense to
take a phase and raise it to a fractional power.
The developments by this author and others (notably Hestenes, even Bohm)
published there make me wonder just how secure the arguments for field theory
(aka Second Quantization) are, as opposed to N-body particle theory in a
semi-classical realm. The only phenomena really left out are pair processes,
and I'm not even all that satisfied with how field theory accounts for them.
>I don't really understand how the change in phase affects the path.
I should have said something more explicit about this. Remember: we
are doing things semiclassically, so we are studying a *quantum*
particle in a *classical* electromagnetic field. Thus we never talk
of photons or the like --- the electromagnetic field is still just the
good old electric and magnetic fields, though we happen to be
describing it using the vector potential. However, our particle
subject to the influence of the electromagnetic field takes *all*
paths from here to there, weighted by a complex phase factor --- and
as Feynman explains in that book you read, if we sum up the results
from all paths from A to B, we get the amplitude for the particle to
get from A to B. Since I am concentrating on using this as a method
of explaining electromagnetism, I am not bothering to do the sum over
all paths: instead, I'm just talking about the phase a particle would
get if it moved along any old path.
This may seem like a funny way to explain the *classical* electromagnetic
field, but it has become a standard way, because it gets one to the idea
of a "connection" very quickly.
I hope, by the way, that you remember how the electric and magnetic
fields are related to the vector potential, so you see how this relates
to older ways of thinking about things. I've already written down
the formulas a few times, but ... did they sink in?
> In article <5thqoj$o...@agate.berkeley.edu>,
> Oz <O...@upthorpe.demon.co.uk> wrote:
> >I guess this may not be quite as
> >simple for more complex field types or you wouldn't have used this form.
> Right, here is where "group representation theory" comes in. For a
> pitifully easy group like U(1), the only way the phase g can "act on"
> the state psi is to multiply it by some power of g. As the lingo
> goes, "the irreducible representations of U(1) are parametrized by a
> single integer, the charge". For fancier groups, there are more ways
> the group can "act on" the particle, and understanding these is the
> key to understanding the classification of elementary particles.
This is what has always bothered me. I understand phases of
wavefunctions, so when you talk about dragging a particle around a
loop in a U(1) gauge theory and having the particle's phase change
(because phases are the Lie algebra of U(1)), I can grasp that.
But I don't really see what you end up doing to the wavefunction with
other groups whose algebras aren't simple complex numbers or anything.
Probably because I'm used to treating wavefunctions as just complex
functions which can easily be multiplied by a complex number (though
I do understand that they're geometric vectors in Hilbert space, etc.)
instead of working with representations on vector spaces.
I've also never understood exactly what kind of symmetry the gauge
symmetry is. Seeing that, say, something is symmetric under spatial
translation is easy: the group element is acting to transform the
wavefunction to one representing something shifted in space. But what
exactly is a gauge symmetry group element doing to the wavefunction?
Anything "physical"?
So far we've talked about the groups U(1), SU(2) and U(2), but when it
comes to physics, we've really concentrated on U(1). U(1) is nice and
easy! Electromagnetism is all about U(1). Any particle has a
"charge" which is some integer multiple of the minimal unit of charge,
and this charge is crucial for figuring out what happens when we move
a particle through an electromagnetic field. For any path in
spacetime, the electromagnetic field gives an element g of U(1). If
we move a particle of charge n along this path, its state gets
multiplied by g^n. What could be simpler?
(Of course it would be nice to see how Maxwell's equations pop out
from this way of thinking, but that's a story for another day.)
What about SU(2), though? Let me explain how people started using
SU(2) to come up with a theory of the strong nuclear force. I should
emphasize at the outset that this theory is now known to be WRONG ---
or more precisely, just an approximate theory, superseded by the Standard
Model. Still, it's very enlightening, and it's nice to know a little
of the history of this stuff. As usual my "history" will be a kind
of folktale designed to explain physics, rather than a precise account
of all the details of what really happened.
Once upon a time, people thought that the only particles in nature
were protons, neutrons, electrons, and photons --- and their
antiparticles. To understand what photons and electrons do, all you
need to understand is electromagnetism. But protons and neutrons
stick together to form the nucleus, which --- since it has a net
positive charge --- must be held together by some *other* force, some
"strong force".
Now, since the electromagnetic force is carried by photons, it makes
sense to guess that the strong force is carried by something too.
Since the strong force dies off roughly exponentially with distance,
a little math suggests that the particle carrying it must have a mass.
This led Yukawa to posit a particle carrying the strong force and also
led him to guess its mass. People called this particle the "mesotron"
or "meson", since its mass was supposed to be intermediate between that
of the electron and that of the proton and neutron.
Sometime around this time people started finding mesons. They found a
"pi meson", which came in three different forms: one with charge +1,
one with charge -1, and one with charge 0. The positive one and the
negative one turned out to be antiparticles of each other, while the
neutral one was its own antiparticle (like the photon). They also
found a "mu meson" with charge -1, together with its antiparticle,
with charge +1.
The pi mesons turned out to interact strongly with protons and
neutrons in roughly the way Yukawa had guessed. I emphasize
*roughly*, since back in these days, nothing in particle physics ever
worked out quite as hoped for! For example, I don't think Yukawa had
dreamt of THREE pions. An even more blatant example was the mu meson,
which turned out NOT to interact strongly with protons and neutrons.
In fact, it turns out to be very much like a big brother of the
electron --- e.g., it decays into an electron. This led the physicist
I. I. Rabi to complain, famously: "WHO ORDERED THIS?"
Good question. But let's concentrate on the proton, neutron and pions
for now! It's a curious fact that the proton and neutron have almost
the same mass --- while about 1800 times as heavy as the electron,
they differ by only a few electron masses. They are suspiciously similar
in other ways, too. For example, they interact in quite similar ways
with the pions. The basic idea is that two protons can interact
strongly by passing a neutral pion back and forth, using the reactions
p -> p + pi0 (p = proton, pi0 = neutral pion)
and the reverse one
pi0 + p -> p.
Think of two big basketball players tossing a little pion back and
forth. The same thing works for neutrons:
n -> n + pi0 (n = neutron)
n + pi0 -> n
and using these reactions a proton can also play catch with a neutron.
More surprisingly, perhaps, you can have a proton emit a positive pion
and turn into a neutron
p -> n + pi+ (pi+ = positive pion)
and also the reverse process, and also
n -> p + pi-
and the reverse.
As you can already see from this, not only are the proton and neutron
pretty similar in what *they* can do, the pions are also similar to
*each other*. The charged pions, being antiparticles of each other,
have exactly the same mass as each other. They don't have the same
mass as the neutral one, but they are suspiciously *close* in mass ---
again, just a few electron masses different.
This led someone, maybe Heisenberg, to make the following guess.
Perhaps the neutron and proton are in some sense just two different
states of the same particle, the "nucleon". And perhaps the different
pions are also just different states of a single particle.
Now if you want to describe the nucleon and its two states, it's good
to use the vector space C^2. (Remember, Oz, C^2 consists of pairs of
complex numbers. I can't be reminding you of this stuff all the
time, it'll drive me nuts.) We can write these vectors as column
vectors and describe the proton using the "up" state:
p = 1
0
and describe the neutron using the "down" state:
n = 0
1
This was a very natural thing for Heisenberg to try, since he already
knew about the quantum mechanics of angular momentum, where an electron
can be in a "spin up" state or a "spin down" state. Heisenberg called
the corresponding quantity for the nucleon ISOSPIN.
I hear someone wondering: What *is* isospin, *really*?
Luckily, Heisenberg was a bit of a positivist, meaning that he was
more interested in getting the right answers than in being stumped
by questions like that! This attitude (which has its good points
and its bad points) is what led him to his "matrix mechanics"
approach to quantum mechanics. It also scored him a big success
with this "isospin" idea.
Now for the kicker --- if we have an element g in SU(2), which
after all is just a 2x2 unitary matrix with determinant 1, we
can apply this matrix to a column vector and get a new column
vector! So we can take a state of the nucleon, hit it with an
element of SU(2), and get a new state! In some sense we can
"rotate" --- maybe I should say "isorotate" --- the neutron to
get the proton, and vice versa.
(To really appreciate this Oz had better learn the quantum mechanics
of angular momentum, so he sees the relation between rotations ---
more precisely SO(3) --- and SU(2). Heisenberg knew all about this,
which is what motivated him to make a bunch of useful analogies between
spin and isospin.)
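To make the "kicker" concrete, here is a tiny numpy sketch of my own: an
explicit one-parameter family of SU(2) elements (real rotation matrices, a
special case) turning the proton state into a proton-neutron superposition,
and at theta = pi/2 into a pure neutron.

# A concrete "isorotation": hit the proton state (1,0) with an explicit
# element of SU(2).  The family g(theta) below is just an illustrative
# special case (real, orthogonal, determinant 1).
import numpy as np

p = np.array([1.0, 0.0])          # proton  = "isospin up"
n = np.array([0.0, 1.0])          # neutron = "isospin down"

def g(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

for theta in (0.0, np.pi / 4, np.pi / 2, np.pi):
    psi = g(theta) @ p
    print(theta, psi, abs(np.linalg.det(g(theta)) - 1) < 1e-12)
# At theta = pi/2 the proton has been "isorotated" into the neutron state;
# in between it is a superposition of proton and neutron.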
Similarly, we can describe states of the pion by vectors in C^3:
pi+ = 1
0
0
pi0 = 0
1
0
pi- = 0
0
1
and there is a way to take an element g in SU(2) and apply it to
a vector in C^3 and get a new vector in C^3, thus "rotating" any
of these states of the pion to any other state. (The details
here are not relevant just now.... but it's very fun stuff.)
Anyway, Heisenberg and others used these ideas to get theories of
the strong force which more or less sort of worked... back then,
nothing ever quite worked all the way, when it came to particle
physics.
Now I want to flash forward to the work of Yang and Mills. This
is really the punchline of my story. They came up with the following
idea: maybe the strong force is described by an SU(2) connection!
In other words, maybe the strong force assigns to each path in
spacetime an element g in SU(2), and when you move a nucleon
along that path, its state changes, and the way it changes is that
you hit it with the element g! Ditto for the pions!
What would this mean? Roughly speaking, it'd mean that there's
no absolute distinction between a proton and a neutron: you could
move a proton from here to there and it could turn into a neutron,
or vice versa.
But that's a sloppy way to put it. A deeper way to put it is this:
the only real way you could experimentally tell if two nucleons
were in the same state is to move them next to each other. And the
answer would depend on the path you moved them along!
It's just like general relativity, where you can't tell whether
two arrows at different points are pointing the same way: the best
you can do is move them together and *then* compare them, and the
answer depends on the path you use.
This is the basic idea of gauge theory: "no comparison at a distance".
The only way you can tell if two things are "the same" is if they
are at the same point in spacetime. It makes a certain curious
sense: how could they be the same and in different places, after all???
But the really cool thing is what happens when you actually try
to compare two things by bringing them to the same place. How
you move something from here to there AFFECTS what it will be like
when it gets there, and the WAY the path affects the result is
ultimately all we mean by a "force"! Each force of nature
describes a different aspect of what happens when we try to see
if two things are the same by moving them to the same place.
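(A crude numerical illustration of this path dependence, with made-up
SU(2) elements standing in for whatever the connection actually assigns
to two different paths between the same two points:)

    import numpy as np
    from scipy.linalg import expm

    # Pauli matrices
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

    # Pretend the connection assigns g1 to one path from here to there,
    # and g2 to a different path.  (The numbers 0.7 and 0.4 mean nothing.)
    g1 = expm(1j * 0.7 * sx)
    g2 = expm(1j * 0.4 * sy)

    p = np.array([1, 0], dtype=complex)    # start with a proton

    print(g1 @ p)     # its state after following path 1
    print(g2 @ p)     # its state after following path 2 -- different!

    # Out along path 1 and back along path 2 is a loop; its group element
    # is generally not the identity, and that failure is the "force":
    print(np.linalg.inv(g2) @ g1)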
[this was actually at the end of your post, but I'd like to address it
first]
You (although maybe not Oz) should actually be familiar with other
types of wave functions. For example the two-component spinor
wavefunctions which describe spin 1/2 particles in non-relativistic
quantum mechanics. In a relativistic theory, you find you need
4-component Dirac spinors to describe spin 1/2 particles.
But, this is not quite what we're talking about in gauge theories.
>This is what has always bothered me. I understand phases of
>wavefunctions, so when you talk about dragging a particle around a
>loop in a U(1) gauge theory and having the particle's phase change
>(because phases are the Lie algebra of U(1)), I can grasp that.
>But I don't really see what you end up doing to the wavefunction with
>other groups whose algebras aren't simple complex numbers or
>anything.
Okay, the thing is, with other gauge groups, particles show up in
families or multiplets. (Representations, if you're a mathematician).
U(1) is really simple so you just get these individual particles.
John is talking about SU(2) isospin in another post. Related to
isospin is SU(3)-flavor, which is also approximate. SU(3) also occurs
as an exact symmetry of the color degrees of freedom in QCD. So, I'll
say a little about that.
In this case you should think of quark wavefunctions as 3-component
objects (ignoring the Dirac spinor nature of the thing or the
electroweak aspects. Just pure QCD here). Now, there is a
representation of su(3) which consists of eight 3x3 matrices and the
connection of this SU(3) gauge theory would assign some linear
combination of these matrices to a given path and the quark
wavefunction would be multiplied by this matrix to tell you what
happens when the quark follows this path.
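(A minimal numerical sketch of that picture, using two of the usual
Gell-Mann matrices as sample su(3) elements and exponentiating the
linear combination to get the actual SU(3) matrix that multiplies the
quark wavefunction; the coefficients are invented for illustration.)

    import numpy as np
    from scipy.linalg import expm

    # Two of the eight Gell-Mann matrices (a standard basis of su(3),
    # up to factors of i)
    lam1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
    lam2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)

    # Suppose the connection assigns this linear combination to our path:
    X = 0.3 * lam1 + 1.1 * lam2
    U = expm(1j * X)                       # the corresponding SU(3) element

    red = np.array([1, 0, 0], dtype=complex)

    print(np.allclose(U.conj().T @ U, np.eye(3)))   # unitary?  True
    print(np.isclose(np.linalg.det(U), 1))          # det 1?    True
    print(U @ red)    # the quark's colour state after following the path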
Gluons are somewhat more complicated particles (being color-anticolor
combinations) so they would transform in a more complicated manner
(according to the larger representation of SU(3) to which they belong).
I think. Of course, gluons are the gauge bosons in this theory so
maybe they are treated differently. Hmmm...guess we'll have to wait
for John to finish the semi-classical stuff he's talking about now and
move on to a fully-quantized theory to get a definitive answer to
this.
--
======================================================================
Kevin Scaldeferri Calif. Institute of Technology
"Pragmatism! Is that all you have to offer?"
>Now, there is a
>representation of su(3) which consists of eight 3x3 matrices and the
>connection of this SU(3) gauge theory would assign some linear
>combination of these matrices to a given path and the quark
>wavefunction would be multiplied by this matrix to tell you what
>happens when the quark follows this path.
>Gluons are somewhat more complicated particles (being color-anticolor
>combinations) so they would transform in a more complicated manner
>(according to the larger representation of SU(3) to which they belong).
>I think. Of course, gluons are the gauge bosons in this theory so
>maybe they are treated differently. Hmmm...guess we'll have to wait
>for John to finish the semi-classical stuff he's talking about now and
>move on to a fully-quantized theory to get a definitive answer to
>this.
A quark is given by a 3D column vector.
(1)
(0) = red,
(0)
(0)
(1) = green, and
(0)
(0)
(0) = blue form a basis for quarks.
(1)
Antiquarks are 3D row vectors, with a basis of
(1,0,0) = antired, (0,1,0) = antigreen, and (0,0,1) = antiblue.
Gluons are 3 x 3 matrices whose trace is 0, with a basis of
(0,1,0)
(0,0,0) = red antigreen,
(0,0,0)
(0,0,1)
(0,0,0) = red antiblue,
(0,0,0)
(0,0,0)
(1,0,0) = green antired,
(0,0,0)
(0,0,0)
(0,0,1) = green antiblue,
(0,0,0)
(0,0,0)
(0,0,0) = blue antired,
(1,0,0)
(0,0,0)
(0,0,0) = blue antigreen,
(0,1,0)
(sqrt2/2,0, 0)
(0, -sqrt2/2,0) = (red antired - green antigreen)/sqrt2, and
(0, 0, 0)
(sqrt2/2,0,0 )
(0, 0,0 ) = (red antired - blue antiblue)/sqrt2.
(0, 0,-sqrt2/2)
And now you know why there are only 8 gluons;
they have to have trace zero.
Every particle which has no colour (like a photon)
belongs to the trivial representation of SU(3).
Say, each one is the identity 3x3 matrix.
Anyway, if you have a quark q and move it along a path
to which the connection assigns the element M of SU(3),
then the quark turns into Mq.
If you have an antiquark a, it turns into aM'.
If you have a gluon g, it turns into MgM'.
By "M'", I mean the inverse of M.
Now, a quark and an antiquark, multiplied together, form a 3x3 matrix.
If this matrix is the identity matrix,
the quark and antiquark can annihilate to form a photon.
If this matrix is a gluon,
the quark and antiquark can annihilate to form a gluon.
Similarly, a quark and a gluon can multiply together to form a quark
(and the same thing with antiquarks and a gluon),
which tells you what happens when a quark (or antiquark)
emits or absorbs a gluon.
If g and h are two gluons,
they combine to form the gluon (gh - hg)/sqrt2.
I think.
(Of course, you have to keep track of flavour, charge, and momenergy
when these interactions happen in real life.)
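(For the numerically inclined, the bookkeeping above is easy to play
with; the SU(3) element M below is just a made-up example.)

    import numpy as np

    red       = np.array([[1], [0], [0]], dtype=complex)   # a quark (column)
    antigreen = np.array([[0, 1, 0]], dtype=complex)        # an antiquark (row)

    red_antigreen = red @ antigreen    # quark times antiquark: a 3x3 matrix
    print(red_antigreen)               # the "red antigreen" gluon from above
    print(np.trace(red_antigreen))     # trace 0, as a gluon must have

    theta = 0.5                        # a made-up SU(3) element M
    M = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0,              0,             1]], dtype=complex)
    Minv = np.linalg.inv(M)

    print(M @ red)                     # quark:     q -> Mq
    print(antigreen @ Minv)            # antiquark: a -> aM'
    print(M @ red_antigreen @ Minv)    # gluon:     g -> MgM'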
-- Toby
to...@ugcs.caltech.edu
Nathan Urban wrote:
>This is what has always bothered me. I understand phases of
>wavefunctions, so when you talk about dragging a particle around a
>loop in a U(1) gauge theory and having the particle's phase change
>(because phases are the Lie algebra of U(1)), I can grasp that.
Nitpicking: phases are the unitary irreducible representations of U(1).
The Lie algebra of U(1) is rather boring: [a,b] = 0 for all a and b.
>But I don't really see what you end up doing to the wavefunction with
>other groups whose algebras aren't simple complex numbers or anything.
Write an element of U(1) as
exp (i s)
where s is a _real_ number. Now imagine an (n x n) matrix which is
diagonal for now, with entries
diag{ exp(i s_1), exp(i s_2),... exp(i s_n) }
where again, each s_i is a real number. This can be written as
exp (i H)
where H is a matrix with real numbers {s_i} all along the diagonal.
H is a special case of a Hermitian matrix, where the complex conjugate
transpose equals the original matrix. Hermitian matrices always have
real numbers as eigenvalues, which is another way of saying they will
be diagonal matrices of real numbers in some basis (its eigenvector basis).
Now you can see what U(n) is. It's just the matrices of the form
exp (i H)
where H is an (n x n) Hermitian matrix. Note that
U^\dag U = exp (-i H^\dag ) exp (i H) = exp (-i H) exp (i H) = I
where "^\dag" is shorthand for Hermitian conjugate (complex conj. transpose).
This shows the relation to the usual definition of a unitary matrix. This
is a particularly useful way for a physicist to write a unitary matrix, since
physical things, like electric or color charge, are always described by
_real numbers_.
What does this do to a wavefunction? Well, wavefunctions must now be thought
of as n-component (column) vectors. In some basis, you are just multiplying
each component of the wavefunction by a phase, but now the phases can depend
on the component index (in contrast to U(1)). In an arbitrary basis, you
are both changing the phases of the components, and mixing them amongst
themselves (see below).
As a gauge group, U(1) is useful because we don't expect anything physical
to depend on the overall phase of a wavefunction, since all physical
measurements involve products like \psi* \psi, and the phase gets lost.
When we try to make the phase invariance _local_ (a function of spacetime
coordinates) is when electromagnetism is forced upon us, in a nifty sort
of way.
If we generalize wavefunctions to many components, we must generalize
measurements to things like \psi^\dag \psi, and now it is not phases
but unitary matrices which leave physical things invariant. When we make
the unitary transformations local, we get the fancier gauge "forces",
like SU(3) giving the color (strong) force.
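(If it helps to see this with actual numbers, here is a little numpy
check that exp(iH) is unitary when H is Hermitian, and that psi^\dag psi
survives the transformation; the random matrix is obviously just an
example.)

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    n = 3

    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (B + B.conj().T) / 2            # Hermitian: H^\dag = H
    U = expm(1j * H)

    print(np.allclose(U.conj().T @ U, np.eye(n)))    # U^\dag U = I: True

    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    print(np.vdot(psi, psi))            # psi^\dag psi
    print(np.vdot(U @ psi, U @ psi))    # the same number after hitting psi with U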
- Paul
I am slightly confused here (!). Clearly the representation exp(iH)
needs some explanation. The only way I can see to make sense of the
expression exp(iH) is to consider it via an expansion
exp(iH) = 1 + iH - HH/2! + .... etc.
but I can't offhand see that this should put any restriction on H such
that the complex conjugate transpose should equal H. I am obviously
confused (not for the first time).
>Now you can see what U(n) is. It's just the matrices of the form
>
> exp (i H)
>
>where H is an (n x n) Hermitian matrix. Note that
>
> U^\dag U = exp (-i H^\dag ) exp (i H) = exp (-i H) exp (i H) = I
Sheef. Wot! Jeez, how am I supposed to decipher this?
>
>where "^\dag" is shorthand for Hermitian conjugate (complex conj. transpose).
Oh good, I always wondered what dag (short for dagger?) meant. Aha, hang
on a trice, the above impenetrable expression then becomes (using U'
for U^\dag, and H' for H^\dag)
UU' = exp (iH) exp (-iH') and aha the criterion for a Hermitian matrix
must therefore be that H' = H. Hmmm, this seems somewhat circular. I
suppose I was expecting to find some reason for the requirement for a
Hermitian conjugate. I am sure it's lurking here somewhere, but where?
>What does this do to a wavefunction? Well, wavefunctions must now be thought
>of as n-component (column) vectors. In some basis, you are just multiplying
>each component of the wavefunction by a phase, but now the phases can depend
>on the component index (in contrast to U(1)). In an arbitrary basis, you
>are both changing the phases of the components, and mixing them amongst
>themselves (see below).
>
>As a gauge group, U(1) is useful because we don't expect anything physical
>to depend on the overall phase of a wavefunction, since all physical
>measurements involve products like \psi* \psi, and the phase gets lost.
>When we try to make the phase invariance _local_ (a function of spacetime
>coordinates) is when electromagnetism is forced upon us, in a nifty sort
>of way.
>
>If we generalize wavefunctions to many components, we must generalize
>measurements to things like \psi^\dag \psi,
For U(1), isn't psi^\dag just psi*, so presumably this is just a special case.
>and now it is not phases
>but unitary matrices which leave physical things invariant. When we make
>the unitary transformations local, we get the fancier gauge "forces",
>like SU(3) giving the color (strong) force.
In John's absence, you wouldn't fancy showing how U(1) pulls out EM
would you? In a *very* elementary way. The key seems to be making things
invariant locally, which I feel will bring in action about which I know
essentially zero.
......
>A quark is given by a 3D column vector.
>(1)
>(0) = red,
>(0)
.........
>Antiquarks are 3D row vectors, with a basis of
>(1,0,0) = antired, (0,1,0) = antigreen, and (0,0,1) = antiblue.
>Gluons are 3 x 3 matrices whose trace is 0, with a basis of
>(0,1,0)
>(0,0,0) = red antigreen,
>(0,0,0)
........
>(sqrt2/2,0, 0)
>(0, -sqrt2/2,0) = (red antired - green antigreen)/sqrt2, and
>(0, 0, 0)
........
>Every particle which has no colour (like a photon)
>belongs to the trivial representation of SU(3).
>Say, each one is the identity 3x3 matrix.
Cor! I just love the above as a representation. However am I right that
this just describes 'colour'? In other words colourless particles all
look the same (the identity), which is not unreasonable. However, then
you ought to go further and make a representation that differentiates
non-coloured particles and their other attributes.
>Anyway, if you have a quark q and move it along a path
>to which the connection assigns the element M of SU(3),
>then the quark turns into Mq.
>If you have an antiquark a, it turns into aM'.
>If you have a gluon g, it turns into MgM'.
>By "M'", I mean the inverse of M.
>
>
>Now, a quark and an antiquark, multiplied together, form a 3x3 matrix.
>If this matrix is the identity matrix,
>the quark and antiquark can annihilate to form a photon.
Ok, but is (in this representation) an electron also represented by the
identity matrix. OK, I can see there are other criteria, but they don't
come from your simple scheme above.
>(Of course, you have to keep track of flavour, charge, and momenergy
>when these interactions happen in real life.)
So how big and complicated would a complete representation of a
'generalised' particle be?
>In this case you should think of quark wavefunctions as 3-component
>objects (ignoring the Dirac spinor nature of the thing or the
>electroweak aspects. Just pure QCD here). Now, there is a
>representation of su(3) which consists of eight 3x3 matrices and the
>connection of this SU(3) gauge theory would assign some linear
>combination of these matrices to a given path and the quark
>wavefunction would be multiplied by this matrix to tell you what
>happens when the quark follows this path.
Hang on a tick. If I peer through the murk I get the impression that we
have an object, here presumably a (non-existent) single free quark
traversing a path in a strong field. This object is described by three
parameters and the field by some operation on these parameters dependent
on the path (which I presume to be 4-D). What I am trying to grok (in
John's absence) is the relationship between the symmetry group and all
of this. Is it that the 'operation' is a group operation and so converts
the quark (parameters a,b,c) into a quark (a',b',c') where (a,b,c),
(a',b',c') are always in the group.
I also get (with my murky peering) the impression that the quark also
generates a field. I guess there ought to be some connection between the
quark and *its* field. Indeed (as one does even with classical EM) one
begins to wonder why one needs a representation of the quark as a
something at all. This all looks like fields interacting with fields. If
one could have some plausible reason why a field configuration just like
a quark (or electron) should be self-stable then one could dispense with
particles altogether. As far as I am aware nobody has yet figured this
out, even for an electron. I guess we more-or-less have for a photon,
though.
Visually I 'see' an electric field as distorting 'electric' spacetime in
the same way that gravity distorts inertial spacetime. Since I know
essentially nothing about it all, that is about as far as it gets. I am
slightly struck by the observation that a photon seems to be an electric
field travelling entirely in the space dimension, and an electron an
electric field travelling in the time direction.
>OK, OK, OK, you stated this, but it seemed just a statement, not an
>explanation.
Indeed. This was *not* my explanation of how electromagnetic field is
related to a U(1) connection. The stupid formula above is not true in
general, and it's not particularly important: it is the result of a
little calculation I did when you asked for an *example* --- you
wanted to see what A might look like for a static uniform electric
field pointing in the x direction.
Here's the explanation again! Read it and let it sink in.
Okay, so: move your particle around a little square in the xy plane.
Say the sides of the square are length epsilon. Its phase gets multiplied
by approximately
exp(ik epsilon^2)
where the number c depends on the electromagnetic field where your
little square is. What's that number k? It's the charge of your
particle times the Z COMPONENT OF THE MAGNETIC FIELD!
If we used a little square in yz plane we'd similarly get an answer
involving the X COMPONENT OF THE MAGNETIC FIELD.
If we used a little square in zx plane we'd similarly get an answer
involving the Y COMPONENT OF THE MAGNETIC FIELD.
What about if we move it around a little square in the xt plane,
where t is the time axis? This is a little harder to do (I can
explain how if you can't figure out how), but if you do it, you
get the X COMPONENT OF THE ELECTRIC FIELD!
If we used a little square in yt plane we'd similarly get an answer
involving the Y COMPONENT OF THE ELECTRIC FIELD.
If we used a little square in zt plane we'd similarly get an answer
involving the Z COMPONENT OF THE ELECTRIC FIELD.
Those are all the choices.
Now, I haven't explained where Maxwell's equations come from,
but I've explained how the electromagnetic field comes out of
U(1), which is a first step.
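(Here is a numerical check of that claim for a uniform magnetic field,
writing the field with a vector potential and doing the line integral
around the little square by brute force. Units with hbar = c = 1, and
the gauge choice and overall sign convention are mine, so treat it as a
sketch.)

    import numpy as np

    q, B, eps = 1.0, 2.0, 1e-3

    def A(x, y):
        # One gauge choice for a uniform field B in the z direction
        return np.array([-B * y / 2, B * x / 2])

    corners = [(0, 0), (eps, 0), (eps, eps), (0, eps), (0, 0)]
    phase, N = 0.0, 1000                # N steps per side of the square

    for (x0, y0), (x1, y1) in zip(corners, corners[1:]):
        for i in range(N):
            t = (i + 0.5) / N
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            dl = np.array([(x1 - x0) / N, (y1 - y0) / N])
            phase += q * A(x, y) @ dl   # q times the line integral of A

    print(phase)            # numerically about q * B * eps^2
    print(q * B * eps**2)   # charge times z component of B times the area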
>Toby Bartels <to...@ugcs.caltech.edu> wrote:
[Unnecessary quoted text deleted by long-suffering moderator.
No, wait -- *I* deleted it! The moderator didn't have to! :-)]
>Cor! I just love the above as a representation. However am I right that
>this just describes 'colour'? In other words colourless particles all
>look the same (the identity), which is not unreasonable. However, then
>you ought to go further and make a representation that differentiates
>non-coloured particles and their other attributes.
Yes, you are right.
As Kevin said, "Just pure QCD here".
>>Now, a quark and an antiquark, multiplied together, form a 3x3 matrix.
>>If this matrix is the identity matrix,
>>the quark and antiquark can annihilate to form a photon.
>Ok, but is (in this representation) an electron also represented by the
>identity matrix. OK, I can see there are other criteria, but they don't
>come from your simple scheme above.
As far as QCD is concerned, the particles might annihilate to form an electron.
Except for mass, QCD doesn't know photons and electrons are different.
But they are very different, and that's what the next quoted sentence is for.
>>(Of course, you have to keep track of flavour, charge, and momenergy
>>when these interactions happen in real life.)
>So how big and complicated would a complete representation of a
>'generalised' particle be?
The complete representation of any particle depends on the particle.
After all, the QCD representation of a quark
is not like the QCD representation of a gluon.
Some things, like flavour, cannot be distinguished
by thinking of them as representations of a symmetry group.
A red up quark and a green up quark are the same particle in different states,
but a red up quark and a red down quark are really different particles,
which you can tell because they have different masses.
Now, I might tell you the complete representation of, say, an up quark
under the U(1) x SU(2) x SU(3) symmetry of the standard model.
But I don't know how electroweak symmetry works, so I can't.
And the representation of, say, an electron would be something different.
-- Toby
to...@ugcs.caltech.edu
>I understand phases [....]
>But I don't really see what you end up doing to the wavefunction with
>other groups whose algebras aren't simple complex numbers or anything.
>Probably because I'm used to treating wavefunctions as just complex
>functions which can easily be multiplied by a complex number (though
>I do understand that they're geometric vectors in Hilbert space, etc.)
>instead of working with representations on vector spaces.
As you suggest, the point is that the wavefunction of a particle is
typically not complex-valued, but takes values in a complex vector
space that is a representation of some group. In the Standard Model,
for example, the wavefunction of any particle takes values in a
representation of SU(3) x SU(2) x U(1). The SU(3) corresponds to
"color" --- the charge of the strong force --- while the SU(2)
corresponds to "weak isospin" and the U(1) corresponds to
"hypercharge" --- which, taken together, are the charges of the
electroweak force.
By the way, the U(1) here is *not* simply the U(1) of
electromagnetism; the electromagnetic U(1) sits inside the above SU(2)
x U(1) in an askew sort of way, so that electric charge is a linear
combination of hypercharge and weak isospin. This should serve as a
reminder that the electromagnetic U(1) typically does *not* act simply
by multiplication by a phase in the obvious way --- so that even the
part you thought you understood is less obvious than one might have
hoped.
Let's consider a little part of this theory: the theory of quarks
interacting via the strong force. Then the relevant symmetry group is
SU(3). The wavefunction of a quark is not simply complex-valued; it
takes values in C^3. (Here I am temporarily ignoring their spin!)
The vector space C^3 is a representation of SU(3) in an obvious way:
you take your 3x3 matrix and multiply your vector in C^3 by it and get
another vector in C^3.
The fact that quark wavefunctions take values in C^3 means that a
quark can come in 3 states or "colors", often whimsically called
1
red = 0
0
0
green = 1
0
and
0
blue = 0
1
I may have gotten the names mixed up since it is an arbitrary and
utterly unimportant convention which vector you call "red", which you
call "green" and so on. However, the fact that quarks come in 3
states like this is very important! If not for this, you couldn't get
baryons with 3 identical quarks having spin up, thanks to Pauli
exclusion. Also, there is very nice evidence that there are exactly
*3* colors coming from the cross-section for colliding positron-
electron pairs to form hadrons: the fact that quarks have 3 color
states increases this cross-section by a certain precise factor.
In short, while a bit complicated, this stuff is not impossible to get
a feeling for, and it explains a huge amount of experimental data. I
find Kerson Huang's "Quarks, Leptons and Gauge Fields" to be a good
intro to this stuff. (One should know some quantum field theory first,
though.)
>I've also never understood exactly what kind of symmetry the gauge
>symmetry is. Seeing that, say, something is symmetric under spatial
>translation is easy. the group element is acting to transform the
>wavefunction to one representing something shifted in space. But what
>exactly is a gauge symmetry group element doing to the wavefunction?
>Anything "physical"?
Sure it's "physical" --- especially if you put quotes around it like
that! For example, in the theory of quarks, the element
(0 -1 0)
(1 0 0)
(0 0 1)
of SU(3) acts to "rotate" the state of a red quark so that it becomes
green:
(0 -1 0) (1) (0)
(1 0 0) (0) = (1)
(0 0 1) (0) (0)
In a sense, this is just as "physical" as rotating a spin-up particle
so that it's spin-down:
(0 -1) (1) = (0)
(1 0) (0) (1)
However, you are certainly entitled to find this answer a bit
unsatisfying. Having grown up in Minkowski space, we are quite at
home in it and find its "geometrical" symmetries quite intuitive ---
they are pretty easy to detect through macroscopic experiments. On
the other hand, "internal" symmetries like the SU(3) x SU(2) x U(1) of
the Standard Model only become visible through elementary particle
experiments, so they are rather mysterious by comparison. The goal of
all unified field theories --- GUTs (grand unified theories),
Kaluza-Klein theories, superstring theories and the like --- is to
provide some sort of simplified account of these internal symmetries.
In GUTs, the idea is to find a simple Lie group (in the technical
sense) containing the symmetry group of the standard model --- SU(5)
has long been a favorite. In Kaluza-Klein theories and superstring
theory, the idea is to think of "internal" symmetries as
"geometrical" symmetries of little curled-up dimensions. However,
none of these approaches fully answers the question "why this symmetry
group?"
This question is clearly a point at which modern physics trails off
into the unknown. As usual with difficult questions, the question
probably needs to be asked in some new and different way before we
make real progress.
In article <5tv9nn$8...@severi.mit.edu>, John Baez <ba...@math.mit.edu>
writes
>Now for the kicker --- if we have an element g in SU(2), which
>after all is just a 2x2 unitary matrix with determinant 1, we
>can apply this matrix to a column vector and get a new column
>vector! So we can take a state of the nucleon, hit it with an
>element of SU(2), and get a new state! In some sense we can
>"rotate" --- maybe I should say "isorotate" --- the neutron to
>get the proton, and vice versa.
Karumbldump. Intrinsically it seems unreasonable for the new state to be
either neutron or proton. It seems (since paths are infinitesimally
divisible at this level) to me that one ought to be able to have a state
that is part neutron and part proton. So long as nobody notices would
this be all right? Of course you would also have a part pion, but if you
got far enough away, you wouldn't be able to tell the difference.
>(To really appreciate this Oz had better learn the quantum mechanics
>of angular momentum, so he sees the relation between rotations ---
Yup. This would indeed have been smart. It would have been starting at
the beginning. Neither are things usually associated with Oz,
unfortunately. Not least because he isn't too smart, and doesn't know
enough to know where the beginning actually IS!
>more precisely SO(3) --- and SU(2). Heisenberg knew all about this,
>which is what motivated him to make a bunch of useful analogies between
>spin and isospin.)
......................
>Now I want to flash forward to the work of Yang and Mills. This
>is really the punchline of my story. They came up with the following
>idea: maybe the strong force is described by an SU(2) connection!
>In other words, maybe the strong force assigns to each path in
>spacetime an element g in SU(2), and when you move a nucleon
>along that path, its state changes, and the way it changes is that
>you hit it with the element g! Ditto for the pions!
>
>What would this mean? Roughly speaking, it'd mean that there's
>no absolute distinction between a proton and a neutron: you could
>move a proton from here to there and it could turn into a neutron,
>or vice versa.
>
>But that's a sloppy way to put it. A deeper way to put it is this:
>the only real way you could experimentally tell if two nucleons
>were in the same state is to move them next to each other. And the
>answer would depend on the path you moved them along!
>
>It's just like general relativity, where you can't tell whether
>two arrows at different points are pointing the same way: the best
>you can do is move them together and *then* compare them, and the
>answer depends on the path you use.
OK. This has a certain semblance of reasonableness.
>This is the basic idea of gauge theory: "no comparison at a distance".
>The only way you can tell if two things are "the same" is if they
>are at the same point in spacetime. It makes a certain curious
>sense: how could they be the same and in different places, after all???
Perhaps rather, how can you be sure you are comparing two things in an
identical situation.
>But the really cool thing is what happens when you actually try
>to compare two things by bringing them to the same place. How
>you move something from here to there AFFECTS what it will be like
>when it gets there, and the WAY the path affects the result is
>ultimately all we mean by a "force"! Each force of nature
>describes a different aspect of what happens when we try to see
>if two things are the same by moving them to the same place.
Unagahh. Ok, this might be considered to be taking things a little to
extremes but intrinsically the logic seems a little hard to attack.
Since on another part of this thread Arendt has suggested that locality
pops EM from the group representation (and other 'forces' from other
groups) I suspect there is a reason for this line of thought. With luck
Arendt will have started the ball rolling before you return.
Right. As I mentioned when I started this discussion of SU(3)-color,
we're ignoring all the other degrees of freedom at the moment. (Just
like John ignored everything but the electromagnetic interactions in
his U(1) discussion.)
>So how big and complicated would a complete representation of a
>'generalised' particle be?
Pretty big and complicated. The gauge group of the Standard Model is
SU(3)xSU(2)xU(1). This is an ugly thing and most people would like to
see it end up being a subgroup of some larger Lie group. The simplest
group this could be is SU(5), but this seems to be excluded by proton
decay experiments by now. There are plenty of other possibilities for
a grand unified theory, as it is called, but the experiments to test
these theories are at the limits of our current experimental
abilities.
Of course, then you still have gravity to deal with and that's a mess
all of its own.
Fortunately, we often work in regimes where only one of the forces is
relevant, so we can ignore the others. Unfortunately, this is not the
case for lots of interesting issues like black holes or wormholes or
the birth of the universe.
>Paul Arendt <par...@nmt.edu> wrote:
>>Now imagine an (n x n) matrix which is
>>diagonal for now, with entries
>> diag{ exp(i s_1), exp(i s_2),... exp(i s_n) }
>>where again, each s_i is a real number. This can be written as
>> exp (i H)
>>where H is a matrix with real numbers {s_i} all along the diagonal.
>>H is a special case of a Hermitian matrix, where the complex conjugate
>>transpose equals the original matrix. Hermitian matrices always have
>>real numbers as eigenvalues, which is another way of saying they will
>>be diagonal matrices of real numbers in some basis (its eigenvector basis).
>I am slightly confused here (!). Clearly the representation exp(iH)
>needs some explanation. The only way I can see to make sense of the
>expression exp(iH) is to consider it via an expansion
>exp(iH) = 1 + iH - HH/2! + .... etc.
>but I can't offhand see that this should put any restriction on H such
>that the complex conjugate transpose should equal H. I am obviously
>confused (not for the first time).
This expression for exp(iH), which is absolutely correct,
makes good sense for any matrix H.
However, exp(iH) will not take the special form Paul described above,
unless H is Hermitian (complex conjugate transpose equals itself).
>>When we try to make the phase invariance _local_ (a function of spacetime
>>coordinates) is when electromagnetism is forced upon us, in a nifty sort
>>of way.
>In John's absence, you wouldn't fancy showing how U(1) pulls out EM
>would you? In a *very* elementary way. The key seems to be making things
>invariant locally, which I feel will bring in action about which I know
>essentially zero.
Here is something I wrote on an earlier occasion describing that process.
I'm afraid it *does* involve action, and a lot of calculation,
but tell me if you can make anything out of it.
Consider a charged relativistic particle (not field) without spin.
Classically, it obeys the equation H^2 = p^2 + m^2,
where H is the Hamiltonian, p is the momentum, and m is a constant (mass).
Quantizing in position space yields the equation
m^2 psi = (@/@x)^2 psi - (@/@t)^2 psi = Delta psi,
where Delta is the relativistic Laplacian, or d'Alembertian.
(Delta = div d, where div is the divergence.)
As you know, this is an unsatisfactory quantum theory
which needs to be quantized a second time.
But the theory can give us insights.
In particular, the probability current density for the particle is
j = i (psi* d psi - psi d psi*),
so the charge current density is qj = iq (psi* d psi - psi d psi*),
where q is the charge on the particle.
Now, reinterpret the equation m^2 psi = Delta psi
as an equation for a complex classical field psi.
In order to do a complex field classically,
treat the real and imaginary parts as individual real fields.
Or treat psi and psi* as independent fields.
The Euler Lagrange equations are m^2 psi = Delta psi and m^2 psi* = Delta psi*.
A Lagrangian density from which these equations may be derived is
L = g(d psi, d psi*) + m^2 psi* psi (the negative of the usual),
where g is the special relativistic metric tensor.
This Lagrangian is invariant under the global gauge transformation
that takes psi to psi e^(-i theta) and takes psi* to psi* e^(i theta),
for a real constant theta, and fixes spacetime.
These are global gauge transformations that form the group U(1).
The current density conserved by this group of transformations is
j = i (psi* d psi - psi d psi*),
the same j earlier associated with the charge current density.
Because of this heuristic argument, U(1) is identified with electromagnetism.
What does U(1) tell us about electromagnetism?
Make the gauge transformations local,
in keeping with the spirit of field theory.
That is, make theta from above a function of spacetime.
Now we have local gauge transformations that locally form the group U(1).
But the Lagrangian is not preserved under such a transformation!
The transformation takes L to
L + g(j, d theta) + psi* psi g(d theta, d theta).
To solve this problem, suppose the existence of another field,
a vector field A coupled to the current density j with the strength q.
Specifically, let L be g(d psi, d psi*) + m^2 psi* psi - q g(j, A).
Under a gauge transformation, let A be taken to A + (1/q) d theta.
This Lagrangian is still not invariant; L is now taken to
L - 2 q psi* psi g(A, d theta) - psi* psi g(d theta, d theta).
But this can be fixed by adding a term for A itself.
Let L be g(d psi, d psi*) + m^2 psi* psi - q g(j, A) + q^2 psi* psi g(A, A).
Let D psi be d psi + i q psi A. Then L = g(D psi, D psi*) + m^2 psi* psi,
simplifying the expression. This D is the covariant derivative.
There is one more term. Let F = d A. F is gauge invariant.
So there may be a term in which A interacts with itself through g(F, F).
L = g(D psi, D psi*) + m^2 psi* psi - (1/4) g(F, F) is the complete Lagrangian.
I don't know where the 1/4 comes from,
but everything else comes naturally from making gauge invariance local.
A term proportional to M^2 g(A, A) for an EM field of mass M is not allowed,
because it is not gauge invariant. So EM must be a massless field.
The Euler Lagrange equations for this final Lagrangian are
m^2 psi + iq g(d psi, A) = Delta psi + iq psi div A + q^2 psi g(A, A),
m^2 psi* + iq g(d psi*, A) = Delta psi* + iq psi* div A + q^2 psi* g(A, A),
and div F = -iqj + 2q^2 psi* psi A = -qJ,
where J = i (psi* D psi - psi D psi*) is the covariant current density.
(dF = ddA = 0 and div F = -qJ are just Maxwell's equations.)
Thus, heuristics makes us believe U(1) is connected with electromagnetism.
Then establishing local U(1) gauge transformations
gives us the correct Lagrangian for a relativistic field theory
in an electromagnetic potential A. The covariant derivative D,
the covariant current density J, and the curvature form F appear also.
Quantize this field theory and you have QED for spin 0 bosons.
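(In case it helps to see the key step verified mechanically: a short
sympy check, in one dimension, that with psi -> psi e^(-i theta) and
A -> A + (1/q) d theta the covariant derivative D psi = d psi + i q psi A
just picks up the same phase as psi. This is only a sketch of the
bookkeeping above, not new physics.)

    import sympy as sp

    x, q = sp.symbols('x q', real=True)
    theta = sp.Function('theta', real=True)(x)
    A = sp.Function('A', real=True)(x)
    psi = sp.Function('psi')(x)

    def D(f, a):
        # covariant derivative: d f + i q a f
        return sp.diff(f, x) + sp.I * q * a * f

    psi_new = psi * sp.exp(-sp.I * theta)          # psi -> psi e^(-i theta)
    A_new = A + sp.diff(theta, x) / q              # A -> A + (1/q) d theta

    # D psi should transform exactly like psi itself:
    print(sp.simplify(D(psi_new, A_new) - sp.exp(-sp.I * theta) * D(psi, A)))  # 0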
-- Toby
to...@ugcs.caltech.edu
It doesn't have to be a free quark. Any quark will do.
> This object is described by three parameters
Really 6 since each part of this vector in color space is complex.
>and the field by some operation on these parameters dependent
>on the path (which I presume to be 4-D).
Right
> What I am trying to grok (in
>John's absence) is the realtionship between the symmetry group and all
>of this. Is it that the 'operation' is a group operation and so converts
>the quark (parameters a,b,c) into a quark (a',b',c') where (a,b,c),
>(a',b',c') are always in the group.
The group operation is indeed the thing that gives us the quark
wavefunction at the end of the path from the original wavefunction.
But, those triples, the quark wavefunctions, are not members of the
group. The group operations have got to be things that take one
normalized complex 3-vector to another, thus 3x3 unitary matrices.
They also need to have unit determinant, although the reason why is
eluding me at the moment. Thus, we have SU(3) as our gauge group.
>Hang on a tick.
Normally, ticks hang on me,
especially after I spend the day walking in the woods.
>If I peer through the murk I get the impression that we
>have an object, here presumably a (non-existent) single free quark
>traversing a path in a strong field. This object is described by three
>parameters and the field by some operation on these parameters dependent
>on the path (which I presume to be 4-D). What I am trying to grok (in
>John's absence) is the relationship between the symmetry group and all
>of this. Is it that the 'operation' is a group operation and so converts
>the quark (parameters a,b,c) into a quark (a',b',c') where (a,b,c),
>(a',b',c') are always in the group?
The quarks do *not* form a group; they form a vector space.
(Every vector space is also a group under addition,
but that's not the symmetry group people talk about.)
The 'operation' is not an operation *on* the symmetry group;
it is the 'operation's themselves that *are* the group.
In QED, the electron's possible wave functions form a Hilbert space
(which is a kind of vector space and is not the symmetry group).
The group U(1) acts on that Hilbert space
by changing the phase of the wave functions in it.
In QCD, the quark's possible wave functions also form a Hilbert space.
Again, the symmetry group acts on that Hilbert space
by changing a wave function into another wave function
(by a method described more completely by me in another post).
The wavefunctions do not form the symmetry group,
and the field does not form the symmetry group.
Rather, the field assigns an element of the group to each path in spacetime,
and this element then acts on the wavefunction
of any particle foolish enough to travel along that path.
>I also get (with my murky peering) the impression that the quark also
>generates a field. I guess there ought to be some connection between the
>quark and *its* field. Indeed (as one does even with classical EM) one
>begins to wonder why one needs a representation of the quark as a
>something at all. This all looks like fields interacting with fields. If
>one could have some plausible reason why a field configuration just like
>a quark (or electron) should be self-stable then one could dispense with
>particles altogether. As far as I am aware nobody has yet figured this
>out, even for an electron. I guess we more-or-less have for a photon,
>though.
Everything that's been done for the photon has been done for the electron.
In full fledged QED, that is.
(And it's been done for quarks and gluons in QCD.)
But John was only doing a semiclassical approach,
and that's all I'm talking about
(which is good, since I don't understand quantum field theory very well yet).
>I am
>slightly struck by the observation that a photon seems to be an electric
>field travelling entirely in the space dimension, and an electron an
>electric field travelling in the time direction.
I don't see this observation at all.
-- Toby
to...@ugcs.caltech.edu
>Here is something I wrote on an earlier occasion describing that process.
>I'm afraid it *does* involve action, and a lot of calculation,
>but tell me if you can make anything out of it.
Well, I looked at it carefully but to be honest, my knowledge has far
too many gaps for it to make sense. I think much of the opaqueness (for
me) lies in the jargon. Now there is nothing wrong with jargon since it
expresses things accurately and clearly to those that understand it. For
those that don't, it doesn't.
I rather get the impression that U(1) thingies happen to:
[Toby:]
"Then establishing local U(1) gauge transformations
gives us the correct Lagrangian for a relativistic field theory
in an electromagnetic potential A. "
I would guess that SU(2) and SU(3) would be two others, and would I be
right in guessing that we don't know any more?
Fortunately I don't have to ever work such things out. Well, probably,
one is never quite sure of John's capability to crack the whip; he did
have me work out simple T^{mu nu} tensors a couple of times.
However I am just starting to get my poor little head round this. You
have a set of particles expressing a particular 'force'. Essentially
they all look the same except for an attribute(s), and this attribute is
varied by interactions with other particles carrying the attribute. So
there are three attributes to the strong force, expressible as a column
vector three high. Their interactions are expressed by a 3x3 thingy
which essentially expresses how they change from one form of the
attribute to the other. This has indeed a certain elegance.
Anyway,
SU(3)xSU(2)xU(1)
So I guess SU(2) is the weak, and would SU(2)xU(1) be the electroweak?
>Of course, then you still have gravity to deal with and that's a mess
>all of its own.
Fortunate, really, since it keeps theoretical physicists in jobs.
It's funny, really, since one would imagine it to be the simplest to
describe, even simpler than the electric 'force'. I am disappointed (and
probably not alone) that gravity is not just a onefold force (ie
attraction only), electricity a twofold (ie +ve & -ve), strong a
threefold (r, g, b) and weak oh some combination of electricity and
gravity (ahem). Ah well, perhaps I had better return to reality.
In article <5ua4j4$d...@agate.berkeley.edu>, John Baez
<ba...@math.mit.edu> writes
>In article <5ts5uk$m...@agate.berkeley.edu>,
>Oz <O...@upthorpe.demon.co.uk> wrote:
<deleted with embarrassment>
>Here's the explanation again! Read it and let it sink in.
<You'll break the keyboard if you hit it that hard, you know.>
<OUCH, I felt that.>
>Okay, so: move your particle around a little square in the xy plane.
>Say the sides of the square are length epsilon. Its phase gets multiplied
>by approximately
>
>exp(ik epsilon^2)
>
>where the number c
Oh dear, I hardly dare to ask. ............ (c?)
>depends on the electromagnetic field where your
>little square is. What's that number k? It's the charge of your
>particle times the Z COMPONENT OF THE MAGNETIC FIELD!
<snip>
>If we used a little square in zt plane we'd similarly get an answer
>involving the Z COMPONENT OF THE ELECTRIC FIELD.
OK, this says it. I don't disagree with it, after all it's virtually
identical with GR in form. After all the stuff with exp(ik), I assumed
you wanted something, er, different.
One has to be struck by the similarity to energy as momentum travelling
in the t-direction. The trouble is that this formulation of EM is
terribly asymmetric, and thus unsatisfactory.
Now I know (because people have mentioned it) that at some level the
magnetic field is a relativistic effect (Ok, maybe a requirement to be
Lorenz invariant or something). As a result I would hope to have a
single unitary-type expression of EM and an appropriate metric and find
that electric and magnetic fields fall out as expressions relating to
charge (although it may not be exactly what we consider as electric
charge).
Ungh, hang on, I'm being stupid (again). Ignoring magnetic moments etc a
static electric charge in a magnetic field doesn't do anything. It's
only a moving electric charge that gets deflected, and it moves perpendicular
to both the mag. field and its direction of motion. Aha, I remember the
little visualisation now. I've no idea how accurate or complete it is
but the idea is an electron traversing radially across a coil sees, due
to Lorentz contraction, more charge on one side than the other, and so is
bent perpendicular to its direction and the axial 'magnetic' field.
Does this mean we can dispense with magnetic fields?
Please sir?
>Now, I haven't explained where Maxwell's equations come from,
>but I've explained how the electromagnetic field comes out of
>U(1), which is a first step.
Well, I hope you have a more assimilable method than Toby's. :-)
I hope you watch out for Lyme's disease, or whatever it's called.
>In QED, the electron's possible wave functions form a Hilbert space
>(which is a kind of vector space and is not the symmetry group).
>The group U(1) acts on that Hilbert space
>by changing the phase of the wave functions in it.
>In QCD, the quark's possible wave functions also form a Hilbert space.
Just very simplistically, what are the characteristics of a Hilbert
space?
Oz <O...@upthorpe.demon.co.uk> wrote:
>Toby Bartels <to...@ugcs.caltech.edu> wrote:
>>Here is something I wrote on an earlier occasion describing that process.
>>I'm afraid it *does* involve action, and a lot of calculation,
>>but tell me if you can make anything out of it.
>Well, I looked at it carefully but to be honest, my knowledge has far
>too many gaps for it to make sense. I think much of the opaqueness (for
>me) lies in the jargon. Now there is nothing wrong with jargon since it
>expresses things accurately and clearly to those that understand it. For
>those that don't, it doesn't.
Then you just need to learn all the jargon! -- a daunting task.
Maybe John will come back and walk you through it;
I don't think I'm up to it.
Better yet, maybe John will come back
and explain it in a simpler, intuitive manner.
Then I will have a better grasp of it too.
>I rather get the impression that U(1) thingies happen to:
>>Then establishing local U(1) gauge transformations
>>gives us the correct Lagrangian for a relativistic field theory
>>in an electromagnetic potential A.
>I would guess that SU(2) and SU(3) would be two others, and would I be
>right in guessing that we don't know any more?
I don't think you can describe the weak force on its own with SU(2);
you have to use U(1)xSU(2) to describe the electroweak force.
SU(3) independently describes the strong force.
And, yes, the electroweak and strong forms of Maxwell's equations
pop out when you assume local gauge invariance under the various groups.
And, no, we don't know any more forces -- except gravity, of course.
-- Toby
to...@ugcs.caltech.edu
> of this. Is it that the 'operation' is a group operation and so converts
> the quark (parameters a,b,c) into a quark (a',b',c') where (a,b,c),
> (a',b',c') are always in the group.
If I may, I think the first part of your sentence is true but not the
2nd: the operations you perform on a vector form a group, but the vectors
themselves (a,b,c) are just in C^3 (it's true it's a group also, but...)
> one
> begins to wonder why one needs a representation of the quark as a
> something at all. This all looks like fields interacting with fields. If
That the quarks are a representation of SU(3) just means that a gluon
can act on a quark. The quark states are set up that way because you
actually begin with SU(3), since you suspect such a symmetry group, and
then construct the quark space as a representation of the group. A state
space is, tautologically, a representation of the symmetry group.
Just as in all physics, I guess you can look at (a,b,c) in an intrinsic
way, without making reference to a basis of C^3, but if you want to do
calculations, you have to choose a basis for quarks. Is that what you mean
by saying fields interacting with fields?
But I think here we are not yet at the stage of fields; we are at the
level of ordinary quantum mechanics, not field theory. I think we have
to second-quantize the stuff to get fields (am I right, big brothers?). In
fact we were in a *semi-classical* case because it is the _gauge_ fields
(not state fields) that are considered, the ones you integrate over a path
to get an element of the gauge group; but we made them act on a single
Hilbert space, that is, on what would be a fiber of an actual state field
(is that it?).
Christian Mercat
Oz <O...@upthorpe.demon.co.uk> wrote at the end:
>Just very simplistically, what are the characteristics of a Hilbert
>space?
Do you know what a vector space is?
Well, I'll tell you anyway.
Vectors are things you can add, subtract, and multiply by scalars.
These operations have to satisfy axioms:
addition is associative and commutative,
subtraction is the opposite of addition,
and scalar multiplication is linear and associative.
An inner product space is a vector space with an inner product.
An inner product is a way to multiply two vectors to get a scalar.
The inner product of v and w is written "<v,w>" (or other ways).
The inner product is required to be *symmetrically sesquilinear*.
This means: it distributes over addition,
<v,cw> = c<v,w> for a scalar c,
and <v,w> is the conjugate of <w,v>.
Now is a good time to talk about what the scalars can be.
You could make the scalars real numbers,
which is very easy since each real number is its own conjugate.
You could make the scalars complex numbers,
which is what we do in quantum mechanics.
You could make the scalars belong to any field you like,
as long as you define some notion of conjugate for that field.
But everything I say below works only with the real or complex numbers.
Now, we want our inner product to be *positive definite*,
which most people assume when they say "inner product" anyway.
The positive part of this means that <v,v> is always a nonnegative real number
(even when the scalars are, in general, complex numbers).
The definite part means that <v,v> is never zero unless v is zero.
Put together, positive definite means <v,v> is always a positive real number,
unless v is zero (in which case <v,v> must be zero, by sesquilinearity).
So put this rule in as another axiom.
The square root of <v,v> is called "norm", written "||v||".
Now we have covered all the algebraic properties of a Hilbert space.
But a positive definite inner product space also has topological properties.
We can define the distance between v and w to be the norm of v-w.
See how positive definiteness fits into this:
you can't take the square root and get a real number without positivity.
And definiteness guarantees that the distance between v and w
won't be zero unless v and w are the same vector.
Now that we have a distance function, we can consider limits.
The sequence {v_0,v_1,v_2,...} converges to v_infinity
if the distance between v_i and v_infinity converges to zero.
Of course, not every sequence converges.
But certain sequences, the Cauchy sequences,
look like they really *should* converge.
A sequence is a Cauchy sequence if the distance between v_i and v_j
converges to zero as i and j both get large
(not merely the distance between consecutive terms).
So here's the final requirement for a Hilbert space:
every Cauchy sequence must converge.
Here is an example: the space of colour states for an antiquark.
As you saw in other posts, these states are 3D row vectors.
I don't know if anyone told you then,
but these are *complex* vectors,
so they need to form a complex Hilbert space
(which is what we use in QM, as I said).
I hope it's obvious how to add and subtract these and multiply them by scalars.
Now we need to define the inner product.
Let <(a,b,c),(d,e,f)> be a* d + b* e + c* f,
where * is the complex conjugate.
Check that this satisfies the definition of inner product.
So one question remains: is this inner product space complete?
Well, every finite dimensional inner product space is complete,
because the complex numbers (and the real numbers) are complete.
So we have a Hilbert space.
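(A quick numerical illustration of that inner product, using numpy's
vdot, which conjugates its first argument; the particular vectors mean
nothing.)

    import numpy as np

    v = np.array([1 + 2j, 0, 1j])       # a colour state of an antiquark
    w = np.array([3, 1 - 1j, 2])

    print(np.vdot(v, w))                # <v,w> = v1* w1 + v2* w2 + v3* w3
    print(np.conj(np.vdot(w, v)))       # equals the conjugate of <w,v>
    print(np.vdot(v, v))                # a nonnegative real number
    print(np.sqrt(np.vdot(v, v).real))  # the norm ||v||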
-- Toby
to...@ugcs.caltech.edu
Oz wrote:
>I am slightly confused here (!). Clearly the representation exp(iH)
>needs some explanation. The only way I can see to make sense of the
>expression exp(iH) is to consider it via an expansion
>
>exp(iH) = 1 + iH - HH/2! + .... etc.
Correct! Now, if H is a diagonal matrix of real numbers "s_j", where
"j" labels the row (and column!), this is just a diagonal matrix of
"exp(i s_j)"... try it and see. If it is instead
H = T diag{ s_j } T^-1
then exp(i H) = T diag{ exp(i s_j) } T^-1
where T is the matrix which diagonalizes H (this can always be done for
a Hermitian matrix). T represents a transformation of bases on the vector
space which H, and U = exp(i H), act upon. This is why I said "a unitary
matrix just changes components' phases in some basis, and also mixes them
in an arbitrary basis", or something like that.
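(A small numpy check of that statement, with a random Hermitian matrix
standing in for H.)

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(1)
    B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    H = (B + B.conj().T) / 2                 # a Hermitian matrix

    s, T = np.linalg.eigh(H)                 # real eigenvalues s_j, columns of T
    print(np.allclose(expm(1j * H),
                      T @ np.diag(np.exp(1j * s)) @ np.linalg.inv(T)))   # True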
>
>but I can't offhand see that this should put any restriction on H such
>that the complex conjugate transpose should equal H.
It doesn't! This just followed from having a diagonal matrix of real
numbers, as a fancy generalization. The above is the definition of a
Hermitian matrix, as you noted below. The restriction is that exp(i H)
won't be unitary if H isn't Hermitian.
>>Now you can see what U(n) is. It's just the matrices of the form
>>
>> exp (i H)
>>
>>where H is an (n x n) Hermitian matrix.
>UU' = exp (iH) exp (-iH') and aha the criterion for a Hermitian matrix
>must therefore be that H' = H. Hmmm, this seems somewhat circular. I
>suppose I was expecting to find some reason for the requirement for a
>Hermitian conjugate. I am sure it's lurking here somewhere, but where?
I like your notation for this: H' means "H complex conjugate transpose,"
aka "H Hermitian conjugate."
The requirement for needing Hermitian conjugates comes from quantum
mechanics: wavefunctions live in a complex normed vector space, where
"bra" vectors are dual to "ket" vectors, in Dirac's notation. Since
"bra" vectors are the Hermitian conjugate of "ket" vectors, if we
multiply psi by H (on the left), we are multiplying psi' by H' (on the
right). A unitary matrix U keeps the form psi' psi invariant:
psi' U' U psi = psi' psi, since U'U = I
For 1-dimensional vectors, psi' = psi*, as you noted, and you get elements
of U(1) for your unitary matrices, which are phases! You can think of
multiplication by a phase as a rotation between real and imaginary parts
of a complex number, which preserves the angle between the real and imaginary
axis. (So you can also think of U(1) as SO(2), if that makes sense.)
Just as orthogonal matrices rotate real vectors' axes in a nice way, unitary
matrices rotate complex vectors' axes in a nice way.
>In John's absence, you wouldn't fancy showing how U(1) pulls out EM
>would you? In a *very* elementary way. The key seems to be making things
>invariant locally, which I feel will bring in action about which I know
>essentially zero.
Yikes! A comprehensive demonstration would take a lot of equations, and
probably introduce plenty of new concepts. I can try to sketch the basics,
however...
I will presume you're familiar with the canonical prescription for
including electromagnetism in ordinary mechanics: replace the momentum
p in your free-particle equation by (p + e A/c), where A is the vector
potential, and both p and A are vectors. If this is new, see the
Feynman lectures, volume 3, or just about any mechanics or electromagnetics
book. What local gauge invariance does is motivate this prescription, from
a symmetry of the equations, and finally provide a geometric interpretation
of the forces which arise, just as general relativity is a geometric theory
of gravity.
Let's start with Schrodinger's equation, to keep things simple (and keep
away from action). Suppose we have
E psi = - (1/2m) d^2/dx^2 psi
for a free particle (where hbar = 1 in these units). The operator
-i d/dx represents momentum (p) in the x-direction. If we replace psi
by exp(i s) psi, where s is a real number, we get the same equation.
The possible values of s form a continuum, and thus are a continuous
symmetry of the equation.
If we let s be a _function_ of x, then the derivative
operator also operates on s, and d/dx pulls down an extra term, i ds/dx,
when acting upon [exp(i s) psi]. Thus, the free-particle Sch. equation
isn't invariant under a _local_ phase change.
However, if we replace p by (p + e A/c) like the classical prescription,
we are replacing (d/dx) by (d/dx + i e A/c) in the operators, and
Schrodinger's equation with the (magnetic) Lorentz force is
E psi = - (1/2m) [d^2/dx^2 + 2i (eA/c) d/dx + i(e/c) dA/dx - (eA/c)^2] psi
We can exploit the (presumably familiar) gauge freedom of electrodynamics:
the same E and B fields are produced when A is replaced by A + grad chi,
where chi is an arbitrary function. If we simultaneously change our gauge
of A when we change the phase of psi, we can recover the original equation
and restore "local gauge symmetry": put chi = - c s /e, and the prescription
psi -> exp (i s) psi
A -> A - (c/e) (d/dx) s
leaves the "electromagnetic Schrodinger equation" invariant, where s is
a function of x. Try it: nine extra terms pop up, and they all cancel!
So, the Sch. equation of a free particle is invariant under global change
of phase, but not local. Including electrodynamics restores the symmetry
when the phase change is local, provided we change the gauge of A with
the above prescription.
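(If you would rather let a computer chase the nine terms: here is a
sympy sketch, in the notation above, checking that the "electromagnetic
Schrodinger equation" changes by nothing more than the overall phase
exp(i s) under the combined prescription. The function names are mine.)

    import sympy as sp

    x = sp.symbols('x', real=True)
    e, c, m, E = sp.symbols('e c m E', real=True, positive=True)
    s = sp.Function('s', real=True)(x)       # the local phase
    A = sp.Function('A', real=True)(x)       # the vector potential
    psi = sp.Function('psi')(x)

    def D(f, a):
        # covariant derivative d/dx + i e a / c
        return sp.diff(f, x) + sp.I * e * a / c * f

    def em_schrodinger(f, a):
        # E f + (1/2m) D^2 f; this vanishes when f solves the equation
        return E * f + D(D(f, a), a) / (2 * m)

    psi_new = sp.exp(sp.I * s) * psi          # psi -> exp(i s) psi
    A_new = A - (c / e) * sp.diff(s, x)       # A   -> A - (c/e) ds/dx

    print(sp.simplify(em_schrodinger(psi_new, A_new)
                      - sp.exp(sp.I * s) * em_schrodinger(psi, A)))   # 0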
What's going on here? We are placing a complex number, the wavefunction,
at every point in space and time. Our equations of motion involve taking
derivatives, which means comparing the complex number at nearby points.
However, we have no guarantee that the "complex axes" at one point have
anything to do with the axes at nearby points, so when we compare the
function at points X and Y, we must allow for the possibility that the
axes themselves (the basis) have "rotated" when moving from Y to X. This
sounds like a lot of trouble, but the math exists which tells us how
to do it. The result is that we replace (d/dx) with (d/dx + i e A/c), and
we call this the "covariant derivative." The reason is that, under a change
of basis, the "covariant derivative" of psi changes the exact same way psi
does. The extra bit, (i e A/c) is called the "U(1) connection," where
"connection" connects the basis at one point with another. For those who've
seen general relativity, this oughta ring a bell: the connection is how we
relate vectors at different points in a curved space, where it's impossible
to line things up globally.
If you were less afraid of action, we could've started with a Lagrangian
formulation of mechanics. Then it's possible to see how the extra bit
MUST have something to do with electrodynamics, since the extra terms
ds/dx couple to the electromagnetic current the same way A does. As a
bonus, we'd also get (with a bit more trouble) the inhomogeneous Maxwell
equations (not just the Lorentz force), from the Lagrange equations of
motion. Using the Jacobi identity (if that rings a bell) on our "covariant
derivative" vector fields gives the other two (homogeneous) Maxwell equations.
Other bonuses: you could see that psi' would describe particles of the
opposite charge, as the covariant derivative for psi' has the opposite
sign for the connection (try this by taking the Hermitian conjugate of
the Schrodinger equation; it's not too bad!). Notice also that the connection
must have 4 spacetime components relativistically, so the connection always
describes a spin-1 field (this covers photons, the W+- and Z
bosons, and gluons). In a Lagrangian formulation, it's also easy to see
that the "gauge bosons" must be massless, as a mass term destroys the
local gauge invariance again. (Including the masses for the electroweak
bosons involves some complicated mathematical gymnastics: Higgs boson
physics. Hmmm, maybe the masslessness isn't a bonus after all...)
Hope this isn't too opaque!
Happy physics,
- Paul
SU(2)xU(1) is indeed the electroweak. However, as John (I think)
mentioned in another post, the U(1) of EM lies in the SU(2)xU(1) of
electroweak in a skewed manner. The weak interaction is not
expressible on its own as an SU(2) gauge theory. One of the reasons
why is that the bosons mediating the weak force are massive, which
would kill gauge invariance if you tried to treat the weak interaction
on its own. Electroweak unification lets you throw the weak and EM
interactions together into a single gauge theory with the gauge group
SU(2)xU(1). This symmetry is, however, spontaneously broken which lets
the Ws and Z get masses. Unfortunately, I think the explanation of
just what that means and how it works will have to wait for another
day.
>>Of course, then you still have gravity to deal with and that's a mess
>>all of its own.
>
>It's funny, really, since one would imagine it to be the simplest to
>describe, even simpler than the electric 'force'.
Unfortunately, quantizing gravity in the manner that all the other
forces are quantized requires a tensor gauge particle (the graviton)
and has all sorts of problems with renormalization and the like.
In article <5uh8n7$r...@agate.berkeley.edu>, Toby Bartels
<to...@ugcs.caltech.edu> writes
>Then you just need to learn all the jargon! -- a daunting task.
I think 'learn' is inappropriate, 'understand' would be better but
completely impractical as far as I am concerned. My aim is far less,
perhaps 'to get some feel' is as far as is reasonable.
>Maybe John will come back and walk you through it;
>I don't think I'm up to it.
>Better yet, maybe John will come back
>and explain it in a simpler, intuitive manner.
>Then I will have a better grasp of it too.
I think you overestimate what I have in mind. I'm not looking for a
rigorous mathematical proof; that is for others better trained than I. I
am looking for a rationale and the concepts. To get this, a certain amount
of maths at a very simple level is required: notice how JB makes me
do some simple (very, very, very simple) examples so that I have at
least a minimum idea of what processes are involved. Then he explains
some concepts, and probably makes me work again. The trick is probably
to realise how simple the examples can be and still give enough feel.
In short, I think you CAN do it.
>I don't think you can describe the weak force on its own with SU(2);
>you have to use U(1)xSU(2) to describe the electroweak force.
>SU(3) independently describes the strong force.
>And, yes, the electroweak and strong forms of Maxwell's equations
>pop out when you assume local gauge invariance under the various groups.
>And, no, we don't know any more forces -- except gravity, of course.
No, that wasn't my question. My question is: could we make up new (but
fictitious) 'forces' using other groups that would not end up being non-
local or breaking some other basic 'law'? In other words, are these
groups unique in some way? This isn't very clear, but I hope you
understand.
In article <340CA530...@urania.nascom.nasa.gov>, Craig DeForest
<junkma...@urania.nascom.nasa.gov> writes
>
>Seen in that way, things like Fourier transformation, Taylor
>expansion, and a whole lot of other useful tools turn out to be
>mere examples of co-ordinate transformations in Hilbert space,
>rather than separate things in their own right.
Eh? This is a major statement of deep import. Would you like to give
some illustrations?
Are all (square?) matrices with real eigenvalues Hermitian?
What's an (I hardly dare ask) eigenvalue?
[NB At the height of the BSE epidemic, when sheep were being blamed as
the cause of the encephalopathy, there was a nice cartoon in a paper of
two sheep standing side by side and one with a pensive expression on
its face is asking the other "What's a brain?". Anyone who has had any
contact with sheep will understand this. It's about how I feel asking
the question.]
--
'Oz "Is it better to seem ignorant and learn,
- or seem wise and stay ignorant?"
[Moderator's note: I *think* that a square matrix which can be diagonalized
by a *unitary* change of basis, *all* of whose eigenvalues are real, must be
Hermitian (diagonalizability alone isn't quite enough). But this is based on a
familiarity with physicists' mathematics, and I may be forgetting some sort
of special case. Christian Mercat's post on eigenvalues elsewhere in this
thread explains what an eigenvalue is. -MM]
Some Hilbert spaces are infinite dimensional, others aren't: every
ordinary finite-dimensional vector space, provided you give it an inner
product, is also a Hilbert space.
Some are countably infinite dimensional, for example the vector space of
finite sequences (sequences that are zero after some N, where N changes
from one sequence to another); some others are not, for example the Hilbert
space of the square-summable sequences: (v_n) is L^2 (the symbol for
square summable) if the sum from k=0 to n of the |v_k|^2 converges as
n goes to infinity. You cannot express a particular vector as a
_finite_ sum of vectors from a countably infinite basis (tricky, huh?)...
So there is always some ugly completion work to do to say properly that
Fourier expansion is indeed a coordinate transformation from one basis to
another: you first have to restrict yourself to a countably infinite
basis, then take the completion of this to get the Hilbert space you
are really interested in. But, I forgot, I am on spr, not s.math.r, so
these subtleties are not important.
Christian Mercat
An operator A has lambda as an eigenvalue iff there exists a vector
v_lambda (an eigenvector) such that
A(v_lambda)=lambda v_lambda, that is, A acts on v_lambda just as a
dilation by a factor lambda. If you know all the eigenvectors of A,
then you can change coordinates to the basis made of all
these eigenvectors, and then A looks like
A = diag{ lambda_1, lambda_2, ..., lambda_n }
that is a diagonal matrix. Then it's easy to compute things such as
exp(A) and all the series on A because
A^k = diag{ lambda_1^k, lambda_2^k, ..., lambda_n^k }
so, for example, summing up exp A=sum A^k/k! we get
exp(A) = diag{ exp(lambda_1), exp(lambda_2), ..., exp(lambda_n) }
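A quick numerical illustration of this (assuming numpy; the small symmetric matrix is just an example of mine): exponentiate by diagonalizing, and compare with the series sum A^k/k!.

    import numpy as np
    from math import factorial

    H = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                  # symmetric, hence diagonalizable

    w, T = np.linalg.eigh(H)                    # eigenvalues w, eigenvectors in the columns of T
    by_eig = T @ np.diag(np.exp(w)) @ T.T       # exp(H) = T diag{ exp(lambda_i) } T^-1

    by_series = sum(np.linalg.matrix_power(H, k) / factorial(k) for k in range(25))
    print(np.allclose(by_eig, by_series))       # True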
What is useful in physics is that sometimes you don't have to know all
the eigenvectors to solve a problem: you just have to know the largest
eigenvalue lambda_m and its eigenvector v_m to predict the behaviour of
A^k when k grows to infinity, because A^k will typically behave like
lambda_m^k times the projection on v_m. Then you look at the next
eigenvalues and eigenvectors one after the other, taking them into account
one by one. This is typically what happens when A=exp(iH) or exp(-H/kT),
the evolution matrix of a quantum or statistical problem, and you want to
know A^t=exp(itH), the evolution after t seconds: first find the ground
state (the vacuum), then treat all the others as perturbations of this
state, from the lowest energy to the highest (and hope that everything
converges!).
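Here's a toy check of that dominant-eigenvalue behaviour (again just a sketch, assuming numpy; the 3x3 matrix is an example of mine with eigenvalues 1, 2 and 4):

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])             # symmetric, eigenvalues 1, 2 and 4

    w, V = np.linalg.eigh(A)                    # eigh returns eigenvalues in ascending order
    lam, v = w[-1], V[:, -1]                    # the largest eigenvalue and its eigenvector

    k = 20
    Ak = np.linalg.matrix_power(A, k)
    approx = lam**k * np.outer(v, v)            # lambda_m^k times the projection onto v_m
    print(np.max(np.abs(Ak - approx)) / np.max(np.abs(Ak)))   # roughly (2/4)^20, i.e. tiny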
Now, to find eigenvalues, you must solve the problem
Av=lambda v, in lambda and in v. But this means that
(A-lambda Id) v=0, where Id is the identity matrix, with v nonzero; so the
determinant of (A-lambda Id) must be zero. Finding an eigenvalue thus means
looking for the solutions of
det(A-lambda Id)=0, and this turns out to be a polynomial equation in
lambda, that is (with X replacing lambda) something like
a_n X^n+...+a_1 X+ a_0 = 0
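As a toy check (Python/numpy, a 2x2 example of my own), the roots of this polynomial are exactly what the eigenvalue routine returns:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    # for a 2x2 matrix the characteristic polynomial is lambda^2 - tr(A) lambda + det(A)
    coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
    print(np.sort(np.roots(coeffs)))            # roots of det(A - lambda Id) = 0
    print(np.sort(np.linalg.eigvals(A)))        # same numbers from the eigensolver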
Now, for a hermitian matrix, suppose H^\dag=H and v is an eigenvector
associated to the eigenvalue lambda,
Hv=lambda v.
Recall that the scalar product <u,v> can be expressed as
<u,v>= u^dag v and especially v^dag v=<v,v>=||v||^2
So taking the norm squared of Hv = lambda v, we get
  ||Hv||^2 = v^dag H^dag H v = |lambda|^2 ||v||^2
What the hermiticity tells you is that the H^dag H in between equals H^2,
so you get
  v^dag H^dag H v = v^dag H H v
but since Hv = lambda v, you have
  ||Hv||^2 = v^dag H lambda v = lambda v^dag H v = lambda v^dag lambda v
           = lambda^2 v^dag v = lambda^2 ||v||^2
But this should be equal to |lambda|^2 ||v||^2, so
you get |lambda|^2=lambda^2, so (simplifying by lambda)
lambda*=lambda
the complex conjugate of lambda is equal to lambda, lambda is a real
number.
_all the eigenvalues of a hermitian operator are real_
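A one-line sanity check of that fact (Python/numpy sketch; the hermitian matrix is just a random example of mine):

    import numpy as np

    rng = np.random.default_rng(3)
    Z = rng.normal(size=(4, 4)) + 1j*rng.normal(size=(4, 4))
    H = (Z + Z.conj().T) / 2                    # H^dag = H, i.e. hermitian
    print(np.allclose(np.linalg.eigvals(H).imag, 0))   # True: every eigenvalue is real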
Christian Mercat
Oh, dear. I'm missing the reference. Now I'm going to have to go
home and search through all my Floyd albums until I figure it out.
>>The group operations have got to be things that take one
>>normalized complex 3-vector to another, thus 3x3 unitary matrices.
>>They also need to have unit determinant, although the reason why is
>>eluding me at the moment.
>
>Didn't John make me work out that for TT* = I, then T had to have
>determinant = 1? No, that isn't right.
Yeah, not quite. If T is unitary (TT* = I) then |det| = 1, but not
necessarily det = 1.
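A quick illustration of the difference (Python/numpy sketch; the matrix is just a phase times the identity):

    import numpy as np

    U = np.exp(0.7j) * np.eye(2)                       # unitary: U U* = I
    print(np.allclose(U @ U.conj().T, np.eye(2)))      # True
    d = np.linalg.det(U)
    print(abs(d), d)                                   # |det| = 1, but det = exp(1.4 i), not 1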
>>Thus, we have SU(3) as our gauge group.
>
>Well, right at the top of this thread, John made me hunt out what SU
>meant. I can't find it, a few got expired before I set it on keep. Now
>the U bit means unitary and the S bit means, dunno, was it spherical?
Special, actually
>Ah, yes, was that it? That's why the determinant had to be one, to give
>a 'magnitude' of one for the 'S' bit? Now, let me think. SU(2) was a 2x2
>complex matrix and described four dimensions, so SU(3) is a 3x3 complex
>matrix and describes six dimensions.
SU(2) is 3 dimensional (remember you (with a little help from John)
figured out the explicit form for these matrices). SU(3) is actually
8 dimensional (in general, SU(n) has n^2 - 1 real dimensions). This is
a little tougher to show, but doable.
>An operator A has lambda as an eigenvalue iff there exists a vector
>v_lambda (an eigenvector) such that
>A(v_lambda)=lambda v_lambda, that is, A acts on v_lambda just as a
>dilation by a factor lambda. If you know all the eigenvectors of A,
>then you can change coordinates to the basis made of all
>these eigenvectors, and then A looks like
>A = diag{ lambda_1, lambda_2, ..., lambda_n }
>that is a diagonal matrix. Then it's easy to compute things such as
>exp(A) and all the series on A because
So it's what Paul had in mind when he said:
=======
Paul:
Now, if H is a diagonal matrix of real numbers "s_j", where
"j" labels the row (and column!), this is just a diagonal matrix of
"exp(i s_j)"... try it and see. If it is instead
H = T diag{ s_j } T^-1
then exp(i H) = T diag{ exp(i s_j) } T^-1
where T is the matrix which diagonalizes H (this can always be done for
a Hermitian matrix).
End Paul
================
Here T is a matrix of the eigenvectors or eigenvalues or at least
closely related.
>Now, to find eigenvalues, you must solve the problem
>Av=lambda v, in lambda and in v. But this means that
>(A-lambda Id) v=0, where Id is the identity matrix, with v nonzero; so the
>determinant of (A-lambda Id) must be zero. Finding an eigenvalue thus means
>looking for the solutions of
>det(A-lambda Id)=0, and this turns out to be a polynomial equation in
>lambda, that is (with X replacing lambda) something like
>a_n X^n+...+a_1 X+ a_0 = 0
Right. The tedious bit. OTOH don't you get determinants by doing
something very similar to the hermitian complex conjugate?
>Now, for a hermitian matrix, suppose H^\dag=H and v is an eigenvector
>associated to the eigenvalue lambda,
>Hv=lambda v.
>Recall that the scalar product <u,v> can be expressed as
><u,v>= u^dag v and especially v^dag v=<v,v>=||v||^2
OK, although 'recall' is a bit strong.
>So taking the norm squared of Hv = lambda v, we get
>  ||Hv||^2 = v^dag H^dag H v = |lambda|^2 ||v||^2
<Snip to placate irascible, but very efficient, moderator :-) >
>But this should be equal to |lambda|^2 ||v||^2, so
>you get |lambda|^2=lambda^2, so (simplifying by lambda)
>
>lambda*=lambda
>
>the complex conjugate of lambda is equal to lambda, lambda is a real
>number.
>_all the eigenvalues of a hermitian operator are real_
OK, now is this the real reason for the use of hermitian complex
conjugates, and unitarity in general? In other words, that you really
do want all eigenvalues to be real? The little mechanical trick to
obtain the hermitian complex conjugate is merely a 'cheat' way to handle
all that messy math above?
So then the question is (which I think has probably been answered
somewhere on this thread, but I can't find it) what is the reason for
the need for all the eigenvalues to be real?
That, then, has answered one question. Why H=H' is a requirement.
Phew!
>Now, I forget, was this going to move on to showing why Hermitian
>matrices, in particular the complex transpose, comes about as a
>requirement?
Let's look at the concept of a linear operator.
If V and W are vector spaces, you can have a function f from V to W.
If f(av+w) = a f(v) + f(w) for any scalar a and vectors v and w in V,
then f is a linear operator.
Sometimes people use the phrase "linear operator" only when W = V.
That's what I'll do since that's the case I care about now.
What if V is C^3, the space of row vectors we were using last time?
Then f can be given by a 3 x 3 complex matrix.
Linear operators have to do with vector spaces.
But Hilbert spaces are more than that; they have inner products.
If a linear operator satisfies <f(v),f(w)> = <v,w>,
then we call that operator "unitary".
Under what conditions is a 3 x 3 complex matrix unitary?
You can have fun checking for yourself that it's just what John Baez told you:
if the matrix's inverse equals its complex transpose.
(Well, you don't have to check it. You can take my word for it.
John would want you to check it. But, hell, I'm no slave driver.)
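If you'd rather let the computer have the fun, here's a sketch in Python/numpy (U is just a random unitary example of mine):

    import numpy as np

    rng = np.random.default_rng(4)
    Z = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
    U, _ = np.linalg.qr(Z)                             # a 3 x 3 unitary matrix

    v = rng.normal(size=3) + 1j*rng.normal(size=3)
    w = rng.normal(size=3) + 1j*rng.normal(size=3)
    print(np.allclose(np.vdot(U @ v, U @ w), np.vdot(v, w)))   # <f(v),f(w)> = <v,w>
    print(np.allclose(np.linalg.inv(U), U.conj().T))           # inverse = conjugate transpose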
OK, now put that idea on hold for a moment and look at this one.
If f is a linear operator, you can do f and then do f again to get f^2.
And if you don't do anything at all, that's the identity operator, so f^0 = 1.
Then you can form the infinite series f^0/0! + f^1/1! + f^2/2! + f^3/3! + ...,
which we call "e^f" for short.
That is to say, e^f (x) = x/0! + f(x)/1! + f(f(x))/2! + f(f(f(x)))/3! + ....
Now we move in for the kill.
What kind of f does it take for e^f to be unitary?
Answer: f must be antiHermitean.
That means that <f(x),y> = -<x,f(y)>.
You can have some more fun and show that, in the case of C^3,
this means the matrix is the negative of its conjugate transpose.
Anyway, if f is antiHermitean, then if is Hermitean,
meaning that <if(x),y> = <x,if(y)>.
(I mean i times f, not the word "if".)
Now, maybe you're wondering to yourself: Why is this true?
Why, if f is antiHermitean, is e^f unitary?
Why, if e^f is unitary, must f be antiHermitean?
Well, I forget. But I'm sure you can figure it out.
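Even without the "why", you can at least watch it happen numerically (a sketch, assuming numpy; F is just a random antiHermitean example of mine, and e^F is summed as a series):

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(5)
    Z = rng.normal(size=(3, 3)) + 1j*rng.normal(size=(3, 3))
    F = (Z - Z.conj().T) / 2                           # antiHermitean: F^dag = -F

    eF = sum(np.linalg.matrix_power(F, k) / factorial(k) for k in range(40))
    print(np.allclose(eF @ eF.conj().T, np.eye(3)))    # e^F is unitary
    print(np.allclose((1j*F).conj().T, 1j*F))          # and iF is Hermitean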
-- Toby
to...@ugcs.caltech.edu
Craig DeForest <junkma...@urania.nascom.nasa.gov> wrote:
>The "standard Freshman answer" is that a Hilbert space is a space
>of functions mapping (say) R -> R. In some sense that I hope
>Baez will clarify for the rest of us, Hilbert space is infinite-
>dimensional, because you can (very loosely speaking to
>avoid countability problems) treat an R->R function as an infinitely
>long list of values -- which looks a lot like an infinitely tall column
>vector.
Well, that's an *example* of a Hilbert space.
More precisely, L^2(R) is the set of all functions f
such that the integral from -infinity to infinity of |f|^2 is finite.
The inner product of two such functions f and g
is the integral from -infinity to infinity of f* g.
Oz can have fun checking that this is really a Hilbert space,
according to the definition I gave in another post.
(O, aren't I cruel! But it's not hard, except that sequence stuff.)
Now, how about that infinitely tall column vector?
Well, a while back in the Photons, shmotons.... thread,
Michael Weiss, John Baez, and I talked about some functions H_n in L^2(R),
which happened to be the eigenstates of the harmonic oscillator,
and every function in L^2(R) can be written as an infinite sum of the form
f = (sum over n = 0 to infinity) f_n H_n for some complex numbers f_n.
Now, I'm not sure what these functions are
(they're related to the Hermite polynomials,
which is why I called them "H", but I really don't want to talk about it),
but the point is that they exist.
This means any function in L^2(R) can be identified
by taking the coefficients f_n and putting them in an infinitely long vector.
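For the curious: assuming the H_n really are the normalized Hermite functions H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi)) (my guess, not something I dug out of the old thread), here's a small Python/numpy check that the first few are orthonormal, using Gauss-Hermite quadrature:

    import numpy as np
    from numpy.polynomial.hermite import hermval, hermgauss
    from math import factorial, pi, sqrt

    def coeffs(n):
        return [0.0]*n + [1.0]                  # coefficient list that picks out H_n alone

    x, w = hermgauss(20)                        # nodes and weights for integrals against exp(-x^2)
    for m in range(3):
        for n in range(3):
            raw = np.sum(w * hermval(x, coeffs(m)) * hermval(x, coeffs(n)))
            norm = sqrt(2**m * factorial(m) * sqrt(pi)) * sqrt(2**n * factorial(n) * sqrt(pi))
            print(m, n, round(raw / norm, 6))   # 1 when m = n, 0 otherwise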
-- Toby
to...@ugcs.caltech.edu
>Some Hilbert spaces
>are countably infinite dimensional, for example the vector space of
>finite sequences (sequences that are zero after some N, where N changes
>from one sequence to another); some others are not, for example the Hilbert
>space of the square-summable sequences: (v_n) is L^2 (the symbol for
>square summable) if the sum from k=0 to n of the |v_k|^2 converges as
>n goes to infinity. You cannot express a particular vector as a
>_finite_ sum of vectors from a countably infinite basis (tricky, huh?)...
When you deal with vector spaces,
you have (in general) no notion of convergent series
and therefore no way to evaluate an infinite sum.
Thus, every vector has to be expressible as a finite linear combination
of vectors from B if B is to be a basis of the vector space.
However, Hilbert spaces have infinite sums.
So it might be that every vector is expressible
as an infinite linear combination of vectors from B.
Then we are generous and let B be considered a basis.
Thus, L^2(R) is of countable dimension
when considered as a Hilbert space,
but not when considered as a vector space.
-- Toby
to...@ugcs.caltech.edu
>In a general way that would be suitable for a moron struggling through
>this mire, could a Cauchy sequence simply be considered as 'well
>behaved'? Please?
Yes, you could ... or, if you just want to avoid thinking about epsilon,
a sequence is Cauchy if the distance between v_i and v_j nears 0
as i and j both go to infinity.
-- Toby
to...@ugcs.caltech.edu
Oz <O...@upthorpe.demon.co.uk> wrote:
>OK, so the condition that:
>>>>Now imagine an (n x n) matrix which is
>>>>diagonal for now, with entries
>>>> diag{ exp(i s_1), exp(i s_2),... exp(i s_n) }
>>>>where again, each s_i is a real number. This can be written as
>>>> exp (i H)
>>>>where H is a matrix with real numbers {s_i} all along the diagonal.
>H exists is that H must be Hermitian.
Sure. This is really easy to see.
All along the diagonal are real numbers, and off the diagonal is only zero.
Now take the transpose -- it doesn't change!
Now take the conjugate -- it doesn't change!
So the matrix equals its own conjugate transpose.
Therefore, it is Hermitean (or Hermitian, whatever).
>Or is it to do with the statement
>(inadvertently deleted) that it should have real eigenvalues? Now I
>don't remember anything about eigenvalues except that they were
>(1) Important.
>(2) Tedious to work out.
Remember this much about them:
If a matrix is diagonal, its eigenvalues are the elements along the diagonal.
So obviously this matrix H must have real eigenvalues,
because, hey!, that's what's along the diagonal.
Anyway, here's what eigenvalues are:
If M is a matrix and a is a number and v is a vector,
and if Mv = av, then a is an eigenvalue of M and v is an eigenvector of M.
Now, do you remember all that stuff about Hilbert spaces?
Consider this: a*<v,v> = <av,v> = <Mv,v> = <v,Mv> = <v,av> = a<v,v>.
Therefore, a* = a. In other words, all of M's eigenvalues are real.
Isn't that nice?
(If <Mv,v> = <v,Mv> confuses you,
wait until you see the post from me
where I define unitary and Hermitean operators
in terms of the inner product.)
-- Toby
to...@ugcs.caltech.edu
I do thank Michael Weiss for his example and I'd like to add mine, which
I found very enlightening. I would have written it down yesterday but I
had to go back to work (today also, but let's have a break):
It's about tests and surveys, where you answer questions by putting a
cross in a box, and then someone says "Americans think that..." How does
he do that? How can you extract the "average" answers, the 2nd group,
and so on? It's all about eigenvalues:
1st, arrange your quiz so that a particular answer can be translated
into a column of 0s and 1s only (easy: a cross=1, no cross=0), say of
height 100.
Then an entire survey (2000 tests) will give you a 100x2000 matrix M.
Consider A=MM*, M multiplied by its transpose. It's a 100x100 matrix.
First remark that A*=(MM*)*= M** M*=MM*=A, so that A is symmetric;
furthermore, it's a real matrix, so A*=A^dag, so it's a hermitian matrix
(what a big word), so its eigenvalues are real.
Diagonalize it! Find its eigenvalues and eigenvectors, and sort them from
the biggest to the smallest eigenvalue.
What is a particular eigenvector? Just a column of real numbers. But if
you normalize it, you may interpret it as a probability of making a cross
in each box: it's a way of answering the questions, a strategy. It
represents a population of answers that are "alike", and the selection
of the criterion that tells us they are alike is done by the
diagonalization.
What does the eigenvalue represent? It is the size of the population in
the survey that you can connect to this strategy.
Then you take the first 3 eigenvalues a, b, c. If you have a particular
vector, you can decompose it in the basis of the eigenvectors of A.
Take just the first three coordinates, normalize them, and
interpret them as x% a, y% b and z% c. You may plot a point in 3D space
for this vector. Do it for all the 2000 tests you have and you will get a
cloud of points, but you can make out distinct shapes: the most important is
along the a axis, the second one along the b axis, the third along the c
axis, but you will be able to make out the other eigenvalues also, and
you may mark them as being alike. Then you give the eigenvectors v_a,
v_b, v_c (which are 3 strategies of answers) to a sociologist and tell
her what she can say about a guy who would answer this way; she will
call a "aggressive", b "prudent" and c "dreamer", for example. Then you
can give your survey to the press, sign articles, go and chat on TV, be
rich and famous, all because you know the eigenvalues of a certain hermitian
matrix.
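Here's a toy version of the recipe in Python/numpy (made-up random answers, not real survey data; it's only a sketch of the procedure):

    import numpy as np

    rng = np.random.default_rng(6)
    n_questions, n_people = 10, 200
    M = (rng.random((n_questions, n_people)) < 0.5).astype(float)   # 0/1 answer matrix

    A = M @ M.T                                 # symmetric and real, hence hermitian
    w, V = np.linalg.eigh(A)
    order = np.argsort(w)[::-1]                 # biggest eigenvalue first
    w, V = w[order], V[:, order]

    coords = V[:, :3].T @ M                     # each person projected onto the top 3 "strategies"
    print(w[:3])                                # the three biggest eigenvalues
    print(coords.shape)                         # 3 coordinates per person, ready to plot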
Christian Mercat
[Moderator's note: I think we've drifted about as far from physics
as we ought to. -TB]
> Here T is a matrix of the eigenvectors or eigenvalues or at least
> closely related.
Yes, T is exactly the matrix whose columns are the coordinates of each
eigenvector in the canonical basis.
> OTOH don't you get determinants by doing
> something very similar to the hermitian complex conjugate?
Nope, you just expand the determinant however you like, for example by the
first row. Let me explain how to recursively calculate an nxn
determinant.
Begin with a 1x1 matrix, just a number. Its determinant is... this
number.
Now, you have A, an nxn matrix. Suppose you know how to calculate the
determinant of an (n-1)x(n-1) matrix. Let's define the determinant for
nxn matrices from those:
Let (a11, a12, ..., a1n) be the first row of A.
Then define det A = a11 det A11 - a12 det A12 + ... + (-1)^(k+1) a1k det A1k
+ ... + (-1)^(n+1) a1n det A1n
where A1k is the (n-1)x(n-1) matrix obtained from A by deleting the kth
column and the 1st row.
For example, for the 2x2 matrix with rows (a b) and (c d),
det = a det(d) - b det(c) = ad - bc; that's the 1st step of the definition.
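In Python the same recipe reads as follows (a plain recursive sketch, not meant to be efficient):

    def det(A):
        # determinant by cofactor expansion along the first row
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for k in range(n):                      # k is 0-based here, so the sign is (-1)**k
            minor = [row[:k] + row[k+1:] for row in A[1:]]   # delete first row and column k
            total += (-1)**k * A[0][k] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))                # -2 = 1*4 - 2*3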
> >Recall that the scalar product <u,v> can be expressed as
> ><u,v>= u^dag v and especially v^dag v=<v,v>=||v||^2
> OK, although 'recall' is a bit strong.
Hey, it was said just 2 or 3 posts ago!
> OK, now is this the real reason for the use of hermitian complex
> conjugates, and unitariness in general. In other words that you really
> do want all eigenvalues to be real? The little mechanical trick to
> obtain the hermitian complex conjugate is merely a 'cheat' way to handle
> all that messy math above?
Actually, I really don't have a clear overview of this point.
1. There are useful operators that are not unitary, but they are then
unobservable, just artifacts of the way we abstract reality into
mathematical objects. What I know is that _observations_ of a process
are eigenvalues of the operator that is supposed to describe the
experiment that you carry out in your lab. A particular physical state is
a superposition of its eigenvectors, but when you perform the
observation, the noise of your macroscopic tool projects the state onto a
particular eigenvector of the operator you are observing and you get the
eigenvalue. What you measure is always a real quantity, so the
eigenvalues are real. This is tautological and not an explanation.
2. If you normalize a state vector so as to have norm=1, then a unitary
operator acting on it will give back a normalized vector. But SO(n)
would do the trick...
3. I heard that for reasons regarding positivity of the energy and the
reflection principle, all the observables need to be unitary, but it's
Greek to me. I'd like to understand.