anyway, about 2 decades ago i was thinking along some of the same
lines as Planck (although ignorant that he had been thinking about this 80
years previously) about a "Universal" set of physical units that would
basically make the universal constants equal to unity if expressed in terms
of those units. of course i came up with the Planck Units (except i also
came up with a unit charge not equal to the electron charge).
the governing basic formulae are:
c = L/T
E = M * c^2
E = Hbar * (1/T)
E = G * M^2 / L ( or F = G * M^2 / L^2 with E = F*L )
E = k * Q^2 / L ( or F = k * Q^2 / L^2 with E = F*L )
since k = 1/(4*pi*e0) and c^2 = 1/(e0*u0), and u0 has a defined value (in
SI it's 4*pi * 10^-7 because an ampere was defined as the current that would
yield a force of 2*10^-7 N/m between a pair of infinitely long parallel wires
spaced 1 meter apart), the last equation (the Coulomb electrostatic energy)
becomes
E = (c^2 * u0/(4*pi)) * Q^2 / L
now if we leave off the question of the unit charge for the time being and solve
the first four equations for the four unknowns (L, M, T, E), you get the
Planck values, which are what you decided to pick for your fundamental units.
L = sqrt( Hbar * G / c^3 )
M = sqrt( Hbar * c / G )
T = sqrt( Hbar * G / c^5 )
( and consequently E = sqrt( Hbar * c^5 / G ) )
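(a quick numerical sanity check of those four expressions, with rounded SI
values plugged in; the Python below is only doing the arithmetic:)

import math

c    = 2.99792458e8      # speed of light, m/s
hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.674e-11         # Newton's constant, m^3/(kg*s^2), rounded

print(math.sqrt(hbar * G / c**3))   # Planck length  ~1.6e-35 m
print(math.sqrt(hbar * c / G))      # Planck mass    ~2.2e-8  kg
print(math.sqrt(hbar * G / c**5))   # Planck time    ~5.4e-44 s
print(math.sqrt(hbar * c**5 / G))   # Planck energy  ~2.0e9   J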
from my POV, the compelling reason to pick those for our fundamental
universal units is that the other fundamental constants c, Hbar, and G all
become unity in terms of those units.
now here's what i thought was a bit interesting coming from a different POV
than most everyone else: rather than just define the unit charge to be the
electronic charge (e), i wanted the Coulomb electrostatic force constant to
be unity also and chose the unit charge to make that happen:
Q = sqrt( (4*pi/u0) * Hbar / c )
which comes out to be 11.70623764 * e, or the square root of alpha^-1 times
the electronic charge where alpha is the Fine-Structure Constant.
Q/e = sqrt( (4*pi/u0) * Hbar / c )/e = 1/sqrt(alpha)
==> alpha = u0/(4*pi) * e^2 * c / Hbar .
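(numerically, with rounded SI values, the two sides do agree; a quick Python check:)

import math

c    = 2.99792458e8       # m/s
hbar = 1.054571817e-34    # J*s
mu0  = 4 * math.pi * 1e-7 # the defined SI value, H/m
e    = 1.602176634e-19    # electron charge, C (rounded)

Q     = math.sqrt((4 * math.pi / mu0) * hbar / c)   # the unit charge defined above
alpha = mu0 * e**2 * c / (4 * math.pi * hbar)       # fine-structure constant

print(Q / e)                  # ~11.706
print(1 / math.sqrt(alpha))   # ~11.706, the same number
print(1 / alpha)              # ~137.036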
anyway, i thought it was kinda neat that what i would call the natural unit
of charge and the electronic charge (both very fundamental and universal
quantities) would be related in this way.
likewise if you choose e to be your fundamental unit (as do most) then you
do have to have a non-unity Coulomb force constant, which is alpha (about
1/137.036), but that isn't so ugly either. it's just a question of which unit
is "more" fundamental and that's a toss up i guess.
Now, finally, it seems that we must perceive reality in terms of the Planck
Units (T, L, M) and perhaps the unit charge Q. if the Planck Length went
from 10^-35 to 10^-32, then we would be 2000 meters tall instead of 2 but
our meter stick would be 1000 meters and we would still call it a "meter"
and the Planck Length would still be about 10^-35. same with time and mass,
but what about charge???
if we perceive reality in terms of the Planck Units and the electronic
charge, e, then a change in the Fine-Structure Constant, alpha, would be
noticed as a change in the Coulomb Force constant since
F = alpha * Hbar * c * q^2/r^2 (that is, F = alpha * q^2/r^2 in Planck units)
where q is in units of e.
and we would also notice a change in the Characteristic Impedance of
radiation in a vacuum since
Z0 = 4*pi*Hbar*alpha/e^2 .
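(that expression does reproduce the usual Z0 = sqrt(u0/e0); checking with rounded SI values:)

import math

c     = 2.99792458e8
hbar  = 1.054571817e-34
mu0   = 4 * math.pi * 1e-7
e     = 1.602176634e-19
alpha = mu0 * e**2 * c / (4 * math.pi * hbar)

print(4 * math.pi * hbar * alpha / e**2)   # ~376.73 ohms
print(mu0 * c)                             # ~376.73 ohms, the familiar Z0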
But, if OTOH, we perceived reality in terms of the Planck Units and the
charge Q = e/sqrt(alpha), then a change in alpha would be noticed as a
change in the charge of an electron.
so which is it? or does it make any difference?
i'll try to monitor this newsgroup but feel free to CC: me at
<rob...@wavemechanics.com> if you want to make sure i see a response.
--
r b-j
Wave Mechanics, Inc.
45 Kilburn St.
Burlington VT 05401-4750
tel: 802/951-9700 ext. 207 http://www.wavemechanics.com/
fax: 802/951-9799 rob...@wavemechanics.com
--
Dear Robert,

comp.dsp people have a guru who is not silly. It is fine that you could
re-discover Planck units. By the way, in Planck units - where you put
epsilon0=mu0=1, speaking in SI units -, the charge of the electron is
really equal to -sqrt(4.pi/137.036...) just like you say.
Physicists however find it convenient to express the charge in
units of (minus) the charge of the electron so that it is always an integer
(or an integer over three, for quarks). You cannot say that one choice of
units is objectively more fundamental; it is a matter of taste. Both are
certainly more fundamental than using Coulombs. However it depends on your
feelings. Anyway you can see that you cannot get rid of the number
1/137.036..., the fine structure constant. It is a dimensionless number
without any units. Therefore it does not depend on our choice of the
units. And there should be some explanation for its value!
We understand this number in terms of more fundamental constants of the
electroweak theory (g and g'), measured at higher energies (instead of
zero energies - as alpha), but a complete calculation yielding
1/137.036... is still missing. String theory is believed to be capable of
deriving its value one day.
Particle physicists usually measure the charge so that the electron has
Q=-1. But then they must include the fine structure constant into the
definition of the energy. The energy density - or the Lagrangian (which is
something related that has the same dimension) - is defined as 1/g^2 times
E^2/2 etc. in the conventions where Q=-1 for the electron.
Either you say that the minimal charge is some strange number (instead of
1), or you can say that it is one but the energy density is not defined as
E^2/2 but this times a strange constant. You cannot get rid of the
constant at both places simultaneously. In fact, both conventions are used
by particle physicists sometimes. It causes a lot of confusion but there
are more difficult problems in the world. ;-)
> Now, finally, it seems that we must perceive reality in terms of the Planck
> Units (T, L, M) and perhaps the unit charge Q. if the Planck Length went
> from 10^-35 to 10^-32, then we would be 2000 meters tall instead of 2 but
It is correct that you invite us to perceive reality in Planck units, but
you do not do it yourself. If you did so, you would rather say: if one
meter was defined not as 10^35 Planck lengths but only 10^32 Planck
lengths, then we would be 2000 meters tall instead of 2 (anyway, two is
also too much) :-) - because everyone knows that a human being must be
about 2.10^35 Planck lengths tall in order to have the right number of
atoms.
> our meter stick would be 1000 meters and we would still call it a "meter"
> and the Planck Length would still be about 10^-35. same with time and mass,
> but what about charge???
And you would also realize that you can say the same sentence with the
charge, too. Your problem is that you omitted the units. The Planck length
is not 10^-35. The Planck length is 10^-35 meters. And a meter is a random
consequence of history; a practical unit in everyday life but a silly unit
without any depth; 10^35 Planck lengths, in a more fundamental language.
You can play the same game with Coulombs instead of meters and the result
is similar; you should remember that you are redefining meters and
Coulombs, not Planck length etc. Planck length is equal to one in natural
units and cannot be redefined.
However maybe you did not want to talk about Coulombs but the two
different conventions for the fundamental unit of charge. Well, if you
changed the number 1/137.036 (contained in the ratio of your two
"fundamental" units) to something else, the world would certainly change
dramatically! In fact, life would be killed if you changed the number by
less than one per cent.
The fine structure constant can be seen at many places. For example, the
(squared) speed of electrons in the hydrogen atom is roughly 1/137 of the
(squared) speed of light. As a consequence of this, the spectrum of the
hydrogen atoms has the famous lines with energies proportional to 1/n^2, but if you
look at the lines with a better resolution, you find out that they are
split into several sublines; they form the so-called fine structure of
the Hydrogen spectrum. The distance between the main lines of the spectrum
is - if I simplify - 137 times bigger than the distance between the lines
in the fine structure; therefore the name. If 137 was replaced by 10, the
spectrum would look completely different, most known nuclei would decay
radioactively (because protons repel each other electromagnetically and
this force would be stronger than the extra "chromostatic" attraction
between quarks - in our world, the electromagnetism is weaker and the
attraction by gluons wins). Simply, it would be a different world. A
couple of dimensionful numbers can be changed without changing the world
(at most, their number can equal the number of independent units); it
just corresponds to redefining your units. However you cannot change
dimensionless numbers. One is always one (for example, it is equal to its
square) and cannot be redefined to be three. On the contrary, there are
three generations of quarks and leptons and you cannot redefine this
number to five. :-)
Best wishes
Lubos
______________________________________________________________________________
E-mail: lu...@matfyz.cz Web: http://www.matfyz.cz/lumo tel.+1-805/893-5025
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Superstring/M-theory is the language in which God wrote the world.
> Dear Robert,
>
> comp.dsp people have a guru who is not silly.
Cf:
http://groups.google.com/groups?threadm=3695AA17.2F32%40viconet.com
http://groups.google.com/groups?threadm=36A5935D.136E%40viconet.com
they were mad at me for saying that the dependent variable of the
dirac-delta function must be dimensionally in reciprocal units of the
independent variable since the integral must be 1 (dimensionless).
> It is fine that you could
> re-discover Planck units. By the way, in Planck units - where you put
> epsilon0=mu0=1, speaking in SI units -, the charge of the electron is
> really equal to -sqrt(4.pi/137.036...) just like you say.
actually, i would put mu0 = 4*pi and epsilon0 = 1/(4*pi) so that the
simple Coulomb force equation has a constant = 1. same for the
gravitational force equation.
> Physicists are however convenient and they usually express the charge in
> units of (minus) the charge of the electron so that it is always integer
> (or integer over three, for quarks).
this e/3 charge thing is one reason (caveat: i don't really know diddley
about quarks) that i didn't like setting the unit charge to be e.
> You cannot say that one choice of
> units is objectively more fundamental; it is a matter of taste. Both are
> certainly more fundamental than using Coulombs. However it depends on your
> feelings. Anyway you can see that you cannot get rid of the number
> 1/137.036..., the fine structure constant.
of course not. it's just a matter of where one wishes to see it.
> It is a dimensionless number
> without any units. Therefore it does not depend on our choice of the
> units. And there should be some explanation for its value!
we would wish so, but i hadn't really had any hope about it (how about
1/(exp(0.5*pi^2) - 2) ? - off by 70 ppm). anyway, my curiosity comes
from (being a layman) just hearing that alpha has been measured to
have changed by about 10 ppm (with an uncertainty of about 4 parts per billion) in the
12 billion years it took for the big-bang background radiation to
befall us. and i'm wondering (if there were a much larger change in
alpha) how that would be noticed. as a change in e? or a change in
epsilon0 and z0? or something else? or is it moot?
> We understand this number in terms of more fundamental constants of the
> electroweak theory (g and g'), measured at higher energies (instead of
> zero energies - as alpha), but a complete calculation yielding
> 1/137.036... is still missing. String theory is believed to be capable to
> derive its value one day.
that'll be interesting.
<snippage of which i understood maybe 1/2>
> > Now, finally, it seems that we must perceive reality in terms of the Planck
> > Units (T, L, M) and perhaps the unit charge Q. if the Planck Length went
> > from 10^-35 to 10^-32, then we would be 2000 meters tall instead of 2 but
>
> It is correct that you invite us to perceive reality in Planck units,
i meant it more as a (naive) observation rather than an invitation.
> but
> you do not do it yourself. If you did so, you would rather say: if one
> meter was defined not as 10^35 Planck lengths but only 10^32 Planck
> lengths, then we would be 2000 meters tall instead of 2 (anyway, two is
> also too much) :-) - because everyone knows that a human being must be
> about 2.10^35 Planck lengths tall in order to have the right number of
> atoms.
well, i tried to say it as such. however our height depends not only on
the number of atoms but also on their size, and the Rydberg constant (or
more precisely, its reciprocal), which depends on e, seems to have
something to say about that.
i would normally think of it as this: we perceive reality (for me it's
just 3D space and time) in terms of, or relative to, the speed of
light, the gravitational constant, Planck's constant, and perhaps the
charge of the electron. so, it seems to me that we cannot really
perceive a change in the speed of light because our sense of length
and time is relative to that. that's why i've always thought that
those "thought experiments" asking "what if the speed of light was 30
miles per hour? what would life be like?" are similar to asking how
many angels dance on the head of a pin.
anyway, if our perception of reality *is* in terms of c, G, and hBar,
then our perception of length, time, and mass must be in terms of the
Planck Units which is a natural reason to use them for theoretical
thinking.
for my "taste", the charge of an electron becomes more secondary being
that it is more of an "object" in the universe and not a parameter of
the universe itself. it seems more logical or "natural" to first
observe the nature of forces of the universe on objects in general,
select appropriate units that would normalize the constants of
proportionality (of the simplest, most basic equations) to one, and
then start looking at some objects (such as atoms and
sub-atomic particles). we sorta do that with Newton's 2nd law: we
don't say that Force is proportional to mass times acceleration
(although it is for |v| << c), we choose our unit of force so that
force *is* mass times acceleration. i would do this for charge also
so that:
E = m (not m * c^2 since we're normalizing c = 1)
E = omega (not hBar*omega for the same reason)
F = m1*m2 / r^2 (not G*m1*m2 / r^2)
and
F = q1*q2 / r^2 (not k*q1*q2 / r^2)
to satisfy the first three, you need to measure length, time, and mass
in units of Planck. to satisfy the fourth in addition, you need to
measure charge in units of e/sqrt(alpha), not e.
> > our meter stick would be 1000 meters and we would still call it a "meter"
> > and the Planck Length would still be about 10^-35. same with time and mass,
> > but what about charge???
>
> And you would also realize that you can say the same sentence with the
> charge, too. Your problem is that you omitted the units. The Planck length
> is not 10^-35. The Planck length is 10^-35 meters. And a meter is a random
> consequence of history; a practical unit in everyday life but a silly unit
> without any depth; 10^35 Planck lengths, in a more fundamental language.
> You can play the same game with Coulombs instead of meters and the result
> is similar;
yes. and the unit charge is not e but would be e/sqrt(alpha),
correct?
> you should remember that you are redefining meters and
> Coulombs, not Planck length etc. Planck length is equal to one in natural
> units and cannot be redefined.
agreed! it just seems to me that it is not consistent to call the
"Planck charge" (i dunno if the term is really used in your biz) e.
it seems much more consistent to me to call the Planck charge such a
charge that (this is hypothetical, since the distances are wildly
small, even for electrons) when two such charges are placed one Planck
length apart, you get one Planck unit of force.
you do that definition first, *then* you do some kind of Millikan
experiment and observe that the charge of an electron appears to be
sqrt(alpha) times your unit charge.
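(a quick check that this definition hangs together numerically: two such unit
charges one Planck length apart should repel with one Planck unit of force,
c^4/G. rounded SI values, Python just for the arithmetic:)

import math

c    = 2.99792458e8
G    = 6.674e-11
hbar = 1.054571817e-34
eps0 = 8.8541878128e-12

Qp = math.sqrt(4 * math.pi * eps0 * hbar * c)   # the unit charge defined above
Lp = math.sqrt(hbar * G / c**3)                 # Planck length

print(Qp**2 / (4 * math.pi * eps0 * Lp**2))     # Coulomb force at one Planck length, ~1.2e44 N
print(c**4 / G)                                 # Planck unit of force, ~1.2e44 N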
> However maybe you did not want to talk about Coulombs but the two
> different conventions for the fundamental unit of charge. Well, if you
> changed the number 1/137.036 (contained in the ratio of your two
> "fundamental" units) to something else, the world would certainly change
> dramatically! In fact, life would be killed if you changed the number by
> less than one per cent.
well, given the present trend, we have about 12 trillion years left
before life is killed off due to alpha getting "out of bounds".
> The fine structure constant can be seen at many places. For example, the
> (squared) speed of electrons in the hydrogen atom is roughly 1/137 of the
> (squared) speed of light. As a consequence of this, the spectrum of the
> hydrogen atoms has the famous lines with energies proportional to 1/n^2, but if you
> look at the lines with a better resolution, you find out that they are
> split into several sublines; they form the so-called fine structure of
> the Hydrogen spectrum. The distance between the main lines of the spectrum
> is - if I simplify - 137 times bigger than the distance between the lines
> in the fine structure; therefore the name. If 137 was replaced by 10, the
> spectrum would look completely different, most known nuclei would decay
> radioactively (because protons repel each other electromagnetically and
> this force would be stronger than the extra "chromostatic" attraction
> between quarks - in our world, the electromagnetism is weaker and the
> attraction by gluons wins). Simply, it would be a different world.
that i understand. how much different would the world be if alpha
quickly changed by another 10 ppm? BTW, which way did it change in
the last 12 billion years? did it increase or decrease by 10 ppm?
> A
> couple of dimensionful numbers can be changed without changing the world
> (at most, their number can equal the number of independent units); it
> just corresponds to redefining your units. However you cannot change
> dimensionless numbers.
*that* i understand! (at least that you cannot change dimensionless
numbers by very much without adverse consequences.)
thanks for the response, Lubos.
...
> Superstring/M-theory is the language in which God wrote the world.
some might say that instead "Superstring/M-theory is a language
construct of humankind to try to verbalize and understand what God
was/is doing when He wrote the world." kinda like reading
hieroglyphics. you never know, maybe in 200 years, they'll toss it on
the trash heap with Newton's Laws.
:-/
r b-j
> they were mad at me for saying that the dependent variable of the
> dirac-delta function must be dimensionally in reciprocal units of the
> independent variable since the integral must be 1 (dimensionless).
And you were right. And if I had come here 3 years ago, there would have been
two of us attacked by the people who don't know what the dimension of
the delta function is.
Delta(momentum) has units of 1/momentum because it can be written as the
derivative d stepfunction(momentum) / d momentum. Here, the
stepfunction is 0 or 1, so it is dimensionless, and therefore the only
dimension comes from the momentum in the denominator. Or just like you
say, the integral must be one. Delta is a distribution and this kind of
"function" has always the dimension of 1 / thedimension of its parameter.
> actually, i would put mu0 = 4*pi and epsilon0 = 1/(4*pi) so that the
> simple Coulomb force equation has a constant = 1. same for the
> gravitational force equation.
Right, I prefer to put epsilon0=1 as SI suggests but this difference is a
psychological one. Your convention with epsilon0=1/(4.pi) corresponds to the
Gaussian units (CGS, centimeter-gram-second) in fact.
> this e/3 charge thing is one reason (caveat: i don't really know diddley
> about quarks) that i didn't like setting the unit charge to be e.
Quarks were discovered much later than the charge of the electron was named
"-e". ;-) Some string theory models admit even more exotic fractions such
as e/11 etc. (see Chapter 9 of The Elegant Universe) but "e" is the
minimal unit of something that can exist freely and does not require too
huge energies. There would be a lot of mess if we suddenly decided that
the sign "e" should be replaced by "3e" in all the textbooks.
> of course not. it's just a matter of where one wishes to see it.
Exactly.
> we would wish so, but i hadn't really had any hope about it (how about
> 1/(exp(0.5*pi^2) - 2) ? - off by 70 ppm). anyway, my curiosity comes
Great formula. Much better than what other people have suggested even in their
papers submitted to xxx.lanl.gov. Unfortunately, your formula is most
likely wrong. :-)
> from (being a layman) just hearing that alpha has been measured to
> have changed by about 10 ppm (out of 4 ppbillion uncertainty) in the
> 12 billion years it took for the big-bang background radiation to
> befall us. and i'm wondering (if there were a much larger change in
> alpha) how that would be noticed. as a change in e? or a change in
> epsilon0 and z0? or something else? or is it moot?
Physically, you cannot notice a change in the value of a letter unless you
precisely define what it means. Physically you can however measure
frequencies of the spectral lines of Hydrogen (the rainbow coming from the
Hydrogen contains some "lines", discrete strips of color). And the
distance between two lines in the fine structure is say 137 times smaller
than the big distance between two specific lines. So this ratio would
change. There would be very many things that would change. If it was more
than 1 ppm or so, you would certainly notice.
> well, i tried to say it as such. however our height depends not only on
> the number of atoms but also on their size, and the Rydberg constant (or
> more precisely, its reciprocal), which depends on e, seems to have
> something to say about that.
It depends on what constants you want to start with. If you start with the
radius of the atom, maybe you can use biological arguments to say that
beings as smart as we are :-) had better be about 10 billion atoms tall.
Therefore a natural unit that they will want to choose to measure things will
be 10 billion Angstroms, i.e. 1 meter. And because of some relation between
the size of the atom and the Planck length, you can also say that we are led
to use units of 10^35 Planck lengths (we called this one "one meter",
approximately).
> charge of the electron. so, it seems to me that we cannot really
> perceive a change in the speed of light because our sense of length
> and time is relative to that.
Exactly! This is the correct viewpoint that I was just explaining to
someone else. In fact, the SI units directly reflect this approach. 1
meter is currently defined as 1/299 792 458 light seconds. So if you keep
your definition, the Universe can change in any way but the speed of light
is always fixed. A change of the speed of light is just a change of our
definitions and it is useful to keep it fixed because it implies a
relation between space and time which is so important, because of
relativity.
> that's why i've always thought that
> those "thought experiments" asking "what if the speed of light was 30
> miles per hour? what would life be like?" are similar to asking how
> many angels dance on the head of a pin.
Yes, this question physically means just "how would it look if we
could move at speeds of c/5 or so".
> anyway, if our perception of reality *is* in terms of c, G, and hBar,
> then our perception of length, time, and mass must be in terms of the
> Planck Units which is a natural reason to use them for theoretical
> thinking.
Exactly. However the amount of blood a hospital needs is at least a
gallon, and therefore we don't use Planck volumes to measure volume, for
instance. Maybe we will use them someday...
> E = m (not m * c^2 since we're normalizing c = 1)
> E = omega (not hBar*omega for the same reason)
Right.
> F = m1*m2 / r^2 (not G*m1*m2 / r^2)
> F = q1*q2 / r^2 (not k*q1*q2 / r^2)
Here I would put the usual 4.pi into the denominator as in SI units. The
reason is that 4.pi.r^2 is the surface area of a sphere - and the electric field
is kind of uniformly distributed over the sphere around your charge. If
you accept my SI conventions with 4.pi, the Maxwell equations (which are
more fundamental, I think) do not have any 4.pi's in them - with your
convention you need to put some 4.pi's into Maxwell's equations. In the
previous case of gravity, we should do the same (with those 4.pi), but in
fact we still use Newton's convention for his constant. A better
denominator could be perhaps 8.pi here. Sometimes a "gravitational
constant" differs from the "Newton's constant" by a factor of 8.pi. All of
this is just a convention.
> yes. and the unit charge is not e but would be e/sqrt(alpha),
> correct?
Correct - up to this convention of 4.pi - I would probably prefer to say
that the unit charge is e/sqrt(4.pi.alpha). Today we say that your
conventions with 4.pi are "not rationalized". :-)
> you do that definition first, *then* you do some kind of Millikan
> experiment and observe that the charge of an electron appears to be
> sqrt(alpha) times your unit charge.
Right, I would say that this is precisely how Millikan did it except for
the various powers of hbar and c that he used everywhere around (and the two
of us set them equal to one). His result was something like sqrt(alpha)
times some powers of hbar and c, even without those 4.pi because he was
using your old conventions.
> well, given the present trend, we have about 12 trillion years left
> before life is killed off due to alpha getting "out of bounds".
Well, and you did not know that we had exactly 1 day left before something
like World War III started.
I hope that all of you and your families and friends are doing fine and
that the attacks on Tuesday (the day when I defended my thesis) will make
us stronger, not weaker.
God bless you
Lubos
P.S. I am not sure whether the experiments "showing" a changing value of
alpha are reliable enough (they contradict some estimates derived from
successes of our theory of the primordial nucleosynthesis) and I don't
know which direction it goes. Sorry that I did not reply to everything.
______________________________________________________________________________
E-mail: lu...@matfyz.cz Web: http://www.matfyz.cz/lumo tel.+1-805/893-5025
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> Right, I prefer to put epsilon0=1 as SI suggests but this difference is a
> psychological one. Your convention with epsilon=1/4.pi corresponds to the
> Gaussian units (CGS, centimeter-gram-second) in fact.
With rationalised units it is a matter of convention
whether or not to take the 4pi as part of the equation,
or to incorporate it in epsilon_0 or mu_0.
MKSA chooses the second option.
In Heaviside-Lorentz or rationalized natural units
it seems best to write Coulomb's and Ampere's law
with an explicit 4pi in it
-and- say that you have units with epsilon_0 = 1,
if you would be crazy enough to worry about
what epsilon_0 should be in such a unit system.
To be discouraged, IMHO:
Saying that eps_0 equals 4pi in such unit systems.
The eps_0 and mu_0 are artefacts of the MKSA system,
without any physical meaning or interpretation,
and nobody would even think about introducing them
if a more sensible unit system had been chosen, long ago,
without them.
But indeed, conventions only,
Jan
> On 10 Sep 2001, 1 day before the attacks, robert bristow-johnson wrote:
> > they were mad at me for saying that the dependent variable of the
> > dirac-delta function must be dimensionally in reciprocal units of the
> > independent variable since the integral must be 1 (dimensionless).
>
> And you were right. And if I have come here 3 years ago, there would be
> two of us attacked by the people who don't know what is the dimension of
> the delta function.
The simplest way to see that is to note that delta(x) must be
homogeneous of degree -1 under scale transformations:
delta(ax) = 1/|a| delta(x),
since integrals involving a delta function should be scale invariant.
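Spelling that scaling argument out (TeX notation, a > 0, f an arbitrary test function):

\int \delta(a x) f(x) \, dx = \int \delta(u) f(u/a) \, du/a = (1/a) f(0)     [u = a x]

so \delta(a x) acts as (1/a) \delta(x); a negative a gives the factor 1/|a|.
And since \int \delta(x) \, dx = 1 is dimensionless, \delta(x) itself must
carry the dimension of 1/x.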
Best,
Jan
> On 10 Sep 2001, 1 day before the attacks, robert bristow-johnson wrote:
> > we would wish so, but i hadn't really had any hope about it (how about
> > 1/(exp(0.5*pi^2) - 2) ? - off by 70 ppm). anyway, my curiosity comes
>
> Great formula. Much better than other people suggested even in their
> papers submitted to xxx.lanl.gov. Unfortunately, your formula is most
> likely wrong. :-)
Hello Lubos, and congratulations on your Ph.D.
Robert's formula for the inverse fine structure constant gives
exp(0.5*pi^2) - 2 = 137.0456367
The best formula I have seen on the Arxiv is
4 pi^3 + pi^2 + pi = 137.0363038
The author claimed that the three terms should have something to do with
the groups SU(3), SU(2) and U(1), but I didn't really understand how.
The best experimental data that I found (from a 20 year old PPDB) is
137.03604(11)
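(for what it's worth, a quick numerical comparison of the two guesses against
that value; the "measured" number below is just the figure quoted above:)

import math

guess_rbj   = math.exp(0.5 * math.pi**2) - 2           # Robert's formula
guess_arxiv = 4 * math.pi**3 + math.pi**2 + math.pi    # the formula from the Arxiv paper
measured    = 137.03604                                # the PPDB value quoted above

print(guess_rbj,   1e6 * (guess_rbj   - measured) / measured)   # ~137.0456, ~70 ppm high
print(guess_arxiv, 1e6 * (guess_arxiv - measured) / measured)   # ~137.0363, within a few ppm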
>actually, i would put mu0 = 4*pi and epsilon0 = 1/(4*pi) so that the
>simple Coulomb force equation has a constant = 1. same for the
>gravitational force equation.
This convention is called "unrationalised";
I (and apparently Lubos) prefer "rationalised".
The reason is that I find the Maxwell equations
more fundamental than the Coulomb equation.
The debate between rationalised and unrationalised never ends.
I am even more radical than most proponents of rationalisation
in that I also rationalise the constant of gravitation.
Since 8 pi G, rather than G itself, appears in
the Einstein equations of general relativity,
I like to set 8 pi G to 1 rather than G,
which makes my Planck units off from others'
by a factor of about 5.
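(That factor is just sqrt(8 pi), since the Planck length, time, and mass all
scale as sqrt(G):)

import math
print(math.sqrt(8 * math.pi))   # ~5.013: ratio between "8 pi G = 1" units and "G = 1" Planck units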
>we sorta do that with Newton's 2nd law: we
>don't say that Force is proportional to mass times acceleration
>(although it is for |v| << c), we choose our unit of force so that
>force *is* mass times acceleration.
Exactly. This example is a good one to use
when trying to explain Planck units to people.
If you want a good *historical* example,
use our modern measurement of heat with energy units.
Once upon a time, people measured heat in different units,
so we had an extra fundamental constant of nature, 4.184 J/cal.
This reminds me that there is another quantity
that can be measured in energy units but usually isn't:
temperature. The fundamental constant of nature here
is Boltzmann's constant k_B, about 1.381e-23 J/K.
If you set Boltzmann's constant to 1,
then entropy becomes dimensionless
and you can see that it can measure
the dimensionless quantity of information.
In fact, let's be as systematic as possible about this.
In the International System of units (SI),
there are 7 allegedly physical units,
so we need to set 7 fundamental constants to 1
in order to make everything dimensionless.
Actually, 1 unit, the candela, is not a physical unit at all
but a physiological unit for apparent brightness to a human eye.
So the 6 physical units are the metre, the second,
the kilogramme, the ampere, the kelvin, and the mole.
The 6 constants are c, hbar, G (I prefer 8 pi G),
epsilon_0 (you prefer 4 pi epsilon_0), k_B, and N_A.
(N_A is Avogadro's number, about 6.022e23/mol.
Setting it to 1 just says that a mole is about 6.022e23,
which comes as naturally as saying that a dozen is 12.)
Now everything is unitless.
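(a little sketch of that bookkeeping in Python; the rounded values and the
dictionary layout are mine, just for illustration:)

# the six constants listed above, in SI units (rounded)
constants = {
    "c":     2.99792458e8,      # m/s
    "hbar":  1.054571817e-34,   # J*s
    "G":     6.674e-11,         # m^3/(kg*s^2)
    "eps_0": 8.8541878128e-12,  # F/m
    "k_B":   1.380649e-23,      # J/K
    "N_A":   6.02214076e23,     # 1/mol
}

# example: setting k_B = 1 just means quoting a temperature as an energy
T_room = 300.0                        # kelvin
print(constants["k_B"] * T_room)      # ~4.14e-21 J: room temperature, measured in joules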
>>Superstring/M-theory is the language in which God wrote the world.
>some might say that instead "Superstring/M-theory is a language
>construct of humankind to try to verbalize and understand what God
>was/is doing when He wrote the world." kinda like reading
>hieroglyphics. you never know, maybe in 200 years, they'll toss it on
>the trash heap with Newton's Laws.
Nope. String theory is the one final theory of physics that explains
every possible phenomenon -- or at least will once we've finished it.
If we ever find something that contradicts string theory,
then that will only prove that what we've found does not exist.
^_^ ^_^ ^_^
(This is just me teasing Lubos; you can ignore it, robert.)
-- Toby
to...@math.ucr.edu
>The best experimental data that I found (from a 20 year old PPDB) is
>137.03604(11)
20 years old? I can beat that!
137.0359895(61)
This is from Brookhaven Natl Lab's Nuclear Wallet Cards, 1995 Jul.
But really it's not that good, since they reveal in the fine print
that their table of fundamental constants is simply swiped from
the 1986 Nov CODATA Bulletin, published 9 long years earlier.
So surely somebody else here can do better!!!
-- Toby
to...@math.ucr.edu
> This convention is called "unrationalised";
> I (and apparently Lubos) prefer "rationalised".
> The reason is that I find the Maxwell equations
> more fundamental than the Coulomb equation.
> The debate between rationalised and unrationalised never ends.
>
> I am even more radical than most proponents of rationalisation
> in that I also rationalise the constant of gravitation.
> Since 8 pi G, rather than G itself, appears in
> the Einstein equations of general relativity,
> I like to set 8 pi G to 1 rather than G,
> which makes my Planck units off from others'
> by a factor of about 5.
No need to do that: Planck units are defined up to a proportionality
constant anyway (it is only dimensional analysis),
so you can mess up the Einstein equation
without changing the Planck units.
There is no end to the confusion you can produce,
once you start meddling.
(I asked John, sometime ago, on precisely this point,
whether he wanted to mess up Planck also, or only Einstein.
No answer, if I remember correctly)
My preference: don't change the established -numerical- values
of the Planck length/time/mass/etc., change the definitions,
if you feel you must change at all.
Best,
Jan
>Toby Bartels wrote:
>>I like to set 8 pi G to 1 rather than G,
>>which makes my Planck units off from others'
>>by a factor of about 5.
>My preference: don't change the established -numerical- values
>of the Planck length/time/mass/etc/, change the definitions,
>if you feel you must change at all.
Fair enough. Say this:
<<I like to set 8 pi G to 1 rather than G,
<<which makes my natural units off from the Planck units
<<by a factor of about 5.
In real life, I wouldn't confuse anybody
by introducing my natural units as "Planck units"
unless we were only talking about order of magnitude.
-- Toby
to...@math.ucr.edu
>The eps_0 and mu_0 are artefacts of the MKSA system,
>without any physical meaning or interpretation,
>and nobody would even think about introducing them
>if a more sensible unit system had been chosen, long ago,
>without them.
Well, I agree that these things can be set to "1" if you
choose your units of charges (and fields) appropriately. But
I don't agree that there is no physical meaning or
interpretation to them!
For instance, eps_0 is more than a pure constant. It's the
relationship between D and E in Maxwell's equations, in free
space. A convention may be chosen so that D = E in free
space, but D and E are still very different entities!
The difference between them is that E is used to find the
force on a charged nonmoving test particle, while D is used
to integrate over a region's boundary to find the total charge
contained within that region (Gauss' law). In other words,
E measures the effect of the field on charged particles, while D
measures the effect of charged particles (as sources of the field).
E tells what charges will do, while D tells where the charges are!
One might argue that these should equal each other by some sort
of action=reaction Newtonian argument (conservation of momentum).
But the fact remains that geometrically, they represent different
entities! This is easiest to understand if one represents E and D
by differential forms. (Eric Forgy just got extremely interested,
didn't he? :-) )
As a differential form in 3-space, E is a one-form. The physical
interpretation is that it represents the differential contribution
to a charged particle's energy (per unit charge) when the particle
moves across the surfaces of the one-form. (For electrostatic
configurations, this one-form field is integrable, so that you
may define E = - grad Phi everywhere.) When the one-form E is
integrated over a line in 3-space, the result is the change in
energy (per unit charge) on a charged particle which has moved
along that line.
As a differential form in 3-space, D is a two-form! The
interpretation is that the integral of D over the closed boundary surface of
a region equals the charge contained in that region, which is Gauss' law.
So, as forms, D and E are related by the Hodge star operator, even
when the units of the vectors #(*D) and #E are chosen to be equal to
one another. (Notation: #(one-form) is the vector obtained by
applying the inverse metric tensor to the one-form, and * is the
Hodge star.)
Similar remarks apply when D and E are forms in (3+1)-D spacetime:
they are both 2-forms, but E is a space-time 2-form, while D is a
space-space 2-form.
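To make the Hodge-star relation concrete in flat 3-space with Cartesian
coordinates (TeX notation; this is just the standard statement, spelled out):

E = E_x \, dx + E_y \, dy + E_z \, dz                                    (a 1-form)

D = \epsilon_0 (E_x \, dy \wedge dz + E_y \, dz \wedge dx + E_z \, dx \wedge dy)
  = \epsilon_0 \, (\ast E)                                               (a 2-form)

and Gauss' law reads \int_{\partial V} D = Q_{enclosed}. Choosing units with
\epsilon_0 = 1 makes the components numerically equal, but E still gets
integrated along curves and D over surfaces.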
> In article <1ezuonc.1cv...@de-ster.demon.nl>,
> J. J. Lodder <j...@de-ster.demon.nl> wrote:
>
> >The eps_0 and mu_0 are artefacts of the MKSA system,
> >without any physical meaning or interpretation,
> >and nobody would even think about introducing them
> >if a more sensible unit system had been chosen, long ago,
> >without them.
>
> Well, I agree that these things can be set to "1" if you
> choose your units of charges (and fields) appropriately. But
> I don't agree that there is no physical meaning or
> interpretation to them!
>
> For instance, eps_0 is more than a pure constant. It's the
> relationship between D and E in Maxwell's equations, in free
> space. A convention may be chosen so that D = E in free
> space, but D and E are still very different entities!
What -physical- experiment would you propose to demonstrate
a physical (as opposed to conceptual) difference between E and D,
in vacuum?
> The difference between them is that E is used to find the
> force on a charged nonmoving test particle, while D is used
> to integrate over a region's boundary to find the total charge
> contained within that region (Gauss' law). In other words,
> E measures the effect of the field on charged particles, while D
> measures the effect of charged particles (as sources of the field).
> E tells what charges will do, while D tells where the charges are!
Indeed, this is a -conceptual- distinction only:
in vacuum there is only one field,
which you may -call- either E or D.
> One might argue that these should equal each other by some sort
> of action=reaction Newtonian argument (conservation of momentum).
> But the fact remains that geometrically, they represent different
> entities! This is easiest to understand if one represents E and D
> by differential forms. (Eric Forgy just got extremely interested,
> didn't he? :-) )
I would not argue anything of the kind.
Instead I would say that there is only one field E,
and that only the Maxwell eqns in vacuum are fundamental.
The Maxwell eqns in matter, with D in them, are approximate eqns,
to be derived from the fundamental eqns in vacuum
by appropriate statistical mechanics.
snip forms, not that they aren't nice, but nice formalism
cannot substitute for physical description.
Best,
Jan
> 20 years old? I can beat that!
>
> 137.0359895+-61
>
> This is from Brookhaven Natl Lab's Nuclear Wallet Cards, 1995 Jul.
>
> But really it's not that good, since they reveal in the fine print
> that their table of fundamental constants is simply swiped from
> the 1986 Nov CODATA Bulletin, published 9 long years earlier.
>
> So surely somebody else here can do better!!!
There is an astronomy paper [1] which reports evidence that the fine
structure constant used to be about one part in 10^5 less than it is now!
Hopefully, some team can verify or refute this evidence, because this
surprising result has importance for e.g. variable-speed-of-light (VSL)
cosmology.
[1] http://arxiv.org/abs/astro-ph/0012539
------------
>Instead I would say that there is only one field E,
>and that only the Maxwell eqns in vacuum are fundamental.
>The Maxwell eqns in matter, with D in them, are approximate eqns,
>to be derived from the fundamental eqns in vacuum
>by appropriate statistical mechanics.
Well, I would argue that the vacuum equations aren't fundamental either.
They are merely a classical approximation to QED.
And even that is merely an approximation to the GWS electroweak theory.
And even that is merely an approximation;
even if there is no grand unification of electroweak and strong forces,
still the appearance of the metric tensor in the GWS theory
must be modified by a theory of quantum gravity.
The value of a physical theory isn't determined by its fundamentalness.
The Maxwell equations in matter, where E and D are different,
are quite useful and accurate across a broad range of phenomena.
You need a relationship between E and D given by the type of matter;
for many types, this can be given by a single constant epsilon.
The value of epsilon in vacuum is epsilon_0 = 1, in appropriate units.
Good, it should be!
-- Toby
to...@math.ucr.edu
Well, from what I wrote below:
>> The difference between them is that E is used to find the
>> force on a charged nonmoving test particle, while D is used
>> to integrate over a region's boundary to find the total charge
>> contained within that region (Gauss' law). In other words,
>> E measures the effect of the field on charged particles, while D
>> measures the effect of charged particles (as sources of the field).
>> E tells what charges will do, while D tells where the charges are!
...you can imagine the following experiment: measure the force on
small charged particles (whose charge is as small as possible),
and divide by the charge of the particle. This gives you the
electric field E at a point. Continue this over the entire (2-D)
boundary of a (3-D) volume, and you have defined E everywhere on
the boundary.
Now, take small coiled loops of paramagnetic material, and measure
(somehow!) the total induction H integrated along the loops as you
rotate the loops (at constant angular velocity) around at the
same points where you measured E. This gives you the flux of
D through the loops. Repeat this over the same boundary that
was done for the E field, and you will have the flux of D through
the boundary.
Gauss' Law now says that this total flux of D equals the charge
contained within the volume (whose boundary was the region D and
E were measured over).
You may choose a unit system in which D = E in vacuum in Euclidean
space. Suppose that we do this. If the experiment above has
been performed in Euclidean space, then the total flux of E through
the boundary will also give the charge contained in the volume.
But in a curved space, the flux of E through the surface will *not*
generally give the charge enclosed, unless it happens to be 0.
So, we can conclude that E cannot equal D at every point on that
surface.
You may instead try to adjust the *electric charge* so that the "E-charge"
(giving the force on a particle) is different from the "D-charge" (to be
used in Gauss' Law) in curved spaces, but D and E are defined to be
the same things. However, to be fair, you should also start having
the "D-charge" change in dielectric media too, if that is the route
taken.
It's nice to see people pointing out that E and D are two different
things :) I saw a comment in some other thread where it was written E
= D, and I was tempted to pipe in, but now I can't resist :)
I know you know this, and what you said was precisely that E is a
1-form and D is a 2-form. The relation between them involves the
space(time) metric. I personally think it is misguided (and
misleading) to write E = D ever! But I'm probably more passionate
about this than most people because I work in applied electromagnetics
:)
Eric
"Paul Arendt" <par...@black.nmt.edu> wrote:
> J. J. Lodder <j...@de-ster.demon.nl> wrote:
> >Paul Arendt <par...@black.nmt.edu> wrote:
[snip]
> J. J. Lodder wrote:
>
> >Instead I would say that there is only one field E,
> >and that only the Maxwell eqns in vacuum are fundamental.
> >The Maxwell eqns in matter, with D in them, are approximate eqns,
> >to be derived from the fundamental eqns in vacuum
> >by appropriate statistical mechanics.
>
> Well, I would argue that the vacuum equations aren't fundamental either.
> They are merely a classical approximation to QED.
[snip more irrelevantia]
Sure, but why drag in these irrelevantia?
The physical meaning of eps_0 and mu_0, if any,
is an issue on the pre-1900 level.
It could, and should, have been settled then, once and for all,
by following Heaviside and Lorentz' proposals
for a sensible EM unit system.
If that had been done then you now would not even have known
that it is actually possible to introduce these notions,
unless you had happened to study history of science.
Best,
Jan
--
"The electrical intensity is given in square root psi" (Thomson)
>Toby Bartels wrote:
>>J. J. Lodder wrote:
>>>Instead I would say that there is only one field E,
>>>and that only the Maxwell eqns in vacuum are fundamental.
>>>The Maxwell eqns in matter, with D in them, are approximate eqns,
>>>to be derived from the fundamental eqns in vacuum
>>>by appropriate statistical mechanics.
>>Well, I would argue that the vacuum equations aren't fundamental either.
>>They are merely a classical approximation to QED.
>Sure, but why drag in these irrelevantia?
>The physical meaning of eps_0 and mu_0, if any,
>is an issue on the pre-1900 level.
In the sense that this physical meaning could be understood before 1900, yes.
But the physical meaning remains, albeit a very simple meaning.
>It could, and should, have been settled then, once and for all,
>by following Heaviside and Lorentz' proposals
>for a sensible EM unit system.
I quite agree.
>If that had been done then you now would not even have known
>that it is actually possible to introduce these notions,
>unless you had happened to study history of science.
But now I disagree. Using Heaviside Lorentz units,
I would still have studied Maxwell's equations for dielectric media
and been introduced to the concept of epsilon and mu.
(These quantities would be dimensionless, of course.)
Then I would learn that epsilon and mu for the vacuum are both exactly 1.
How nice!
That the dielectric constant of the vacuum is 1
has as much physical meaning as that the speed of light there is 1.
The speed of light in vacuum may be a very trivial quantity in good units,
but it retains its physical meaning -- that is the speed that light travels.
-- Toby
to...@math.ucr.edu
> It's nice to see people pointing out that E and D are two different
> things :) I saw a comment in some other thread where it was written E
> = D, and I was tempted to pipe in, but now I can't resist :)
>
> I know you know this, and what you said was precisely that E is a
> 1-form and D is a 2-form. The relation between them involves the
> space(time) metric. I personally think it is misguided (and
> misleading) to write E = D ever! But I'm probably more passionate
> about this than most people because I work in applied electromagnetics
Things which hold in one particular representation
of a theory may be helpful to some,
but they cannot have -physical- content.
They are different descriptions of the same thing.
Likewise, you would not claim that a particle
actually has two positions, x_\mu and x^\nu,
because a covariant vector is something entirely different
than a contravariant one, mathematically speaking.
Best,
Jan
--
"Mathematicians are like Frenchmen:
They translate everything you say to them
immediately into their own language,
after which it is something entirely different" (Goethe)
> In article <1f0p07l.poz...@de-ster.demon.nl>,
> J. J. Lodder <j...@de-ster.demon.nl> wrote:
> >Paul Arendt <par...@black.nmt.edu> wrote:
> >> For instance, eps_0 is more than a pure constant. It's the
> >> relationship between D and E in Maxwell's equations, in free
> >> space. A convention may be chosen so that D = E in free
> >> space, but D and E are still very different entities!
> >What -physical- experiment would you propose to demonstrate
> >a physical (as opposed to conceptual) difference between E and D,
> >in vacuum?
> ...you can imagine the following experiment: measure the force on
> small charged particles (whose charge is as small as possible),
> and divide by the charge of the particle. This gives you the
> electric field E at a point. Continue this over the entire (2-D)
> boundary of a (3-D) volume, and you have defined E everywhere on
> the boundary.
Sure, gives you E(r) for all r, in principle.
And that is all there is to know,
in an electrostatic situation.
> Now, take small coiled loops of paramagnetic material, and measure
> (somehow!) the total induction H integrated along the loops as you
> rotate the loops (at constant angular velocity) around at the
> same points where you measured E. This gives you the flux of
> D through the loops. Repeat this over the same boundary that
> was done for the E field, and you will have the flux of D through
> the boundary.
No need to introduce paramagnetic matter: a flip coil will do.
And: this measurement will not tell you anything new:
The results of any further experiments can be predicted
from E(r) measured above.
> Gauss' Law now says that this total flux of D equals the charge
> contained within the volume (whose boundary was the region D and
> E were measured over).
>
> You may choose a unit system in which D = E in vacuum in Euclidean
> space. Suppose that we do this. If the experiment above has
> been performed in Euclidean space, then the total flux of E through
> the boundary will also give the charge contained in the volume.
>
> But in a curved space, the flux of E though the surface will *not*
> generally give the charge enclosed, unless it happens to be 0.
> So, we can conclude that E cannot equal D at every point on that
> surface.
Let's not drag curved spaces into this discussion.
The emptiness of the argument can also be seen in Euclidean space,
by using a coordinate system with metric tensor not the identity.
Indeed, there are two ways then to calculate the charge in a given
volume: a correct and an incorrect one.
> You may instead try to adjust the *electric charge* so that the "E-charge"
> (giving the force on a particle) is different from the "D-charge" (to be
> used in Gauss' Law) in curved spaces, but D and E are defined to be
> the same things. However, to be fair, you should also start having
> the "D-charge" change in dielectric media too, if that is the route
> taken.
The 'cure' (two charges) would be worse
than the (non-existent :-) disease,
in my opinion.
Jan
"J. J. Lodder" <nos...@de-ster.demon.nl> wrote:
> Eric Alan Forgy <fo...@uiuc.edu> wrote:
> >
> > I know you know this, and what you said was precisely that E is a
> > 1-form and D is a 2-form. The relation between them involves the
> > space(time) metric. I personally think it is misguided (and
> > misleading) to write E = D ever! But I'm probably more passionate
> > about this than most people because I work in applied electromagnetics
>
> Things which hold in one particular representation
> of a theory may be helpful to some,
> but they cannot have -physical- content.
> They are different descriptions of the same thing.
I agree 100%
> Likewise, you would not claim that a particle
> actually has two positions, x_\mu and x^\nu,
> because a covariant vector is something entirely different
> than a contravariant one, mathematically speaking.
I agree 100%
Hmm... if I agree 100% with what you said, then why does it seem like
you were disagreeing with what I said? :) If you are really
disagreeing with what I said, would you mind spelling out a bit more
clearly in what way you disagree? I'd like to know. I'm supposed to be
an expert in EM, so if I am missing something basic, I'd like to know
about it :)
Maybe I'll explain what I mean more precisely so that if there is any
hole in my logic, it will be easier to spot. I'd say that given E, a
metric, and some information about the material properties, you can
find D. Conversely, given D, a metric, and some information about the
material properties, you can find E. So, essentially, given a metric
and some information about the material properies, then perhaps, in
some sense of the word, you can say E and D are the "same".
But then what about this scenario? Say we have a time-harmonic system
so that
d/dt -> i*w,
and we know J and H. Then we can find D simply by
D = [curl H - J]/(i*w).
With this D, a metric, and information about the material properties,
we can then find E. Conversely, if we know E, then we can similarly
find B by
B = [-curl E]/(i*w).
With this B, a metric, and information about the material properties,
we can then find H. So, under the prescribed scenario, given E we can
find H, and given H we can find E. Would you then say that E and H are
the "same"?
One of the reasons I am such a stickler about saying E and D are not
the same comes from experience with numerical solutions to Maxwell's
equations. E, being a 1-form, is naturally associated to the edges of
some mesh. D, being a 2-form, is naturally associated to the faces of
some mesh. To me, saying E and D are the same is like saying edges are
the same as faces :) In 3d, there is a nice way to associate edges and
faces. That is by constructing a dual mesh. For instance, if the mesh
is a simplicial complex, then you can construct the dual mesh in many
ways, e.g. a Poincare dual or a barycentric dual. Then, for every
p-simplex of the primary mesh, you have an (n-p)-cell of the dual
mesh, which is not simplicial. Still, I'd hesitate to say E and D were
the same because that would be like saying an edge is the same as a
dual cell. Sure, they are related, but I wouldn't call them the same.
The last paragraph was based on lattice arguments, but those arguments
do manifest themselves when you go to the continuum limit (if you
desired to do so... I'm personally of the opinion you should do away
with the continuum model of space-time altogether, but that is a
different story). I think it is a subtle, yet important, distinction
between E and D, but I think it should be made. (By the way, those
arguments are of relevance for spin foam models as well.)
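(a toy illustration of that bookkeeping, entirely my own sketch and not any
standard mesh code: store D as one flux number per face of a single cube and
check the discrete Gauss law.)

# D is a 2-form, so on a mesh it lives on faces: one (outward) flux number per face.
outward_flux_of_D = {
    "+x": 0.5, "-x": -0.3,    # toy numbers: flux of D out through each face of one cube
    "+y": 0.2, "-y":  0.1,
    "+z": 0.4, "-z": -0.4,
}

# discrete Gauss law: the sum of the outward fluxes is the charge inside the cube
print(sum(outward_flux_of_D.values()))   # 0.5

# E, by contrast, is a 1-form: it would be stored as one number per *edge*
# (a voltage drop along each of the cube's 12 edges), not per face.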
Is there really a disagreement here, or is it a semantic issue about
the meaning of the word "same"? If it is the latter, then there is no
need to argue over it. You say tomato, I say tomato... c'est la vie :)
Cheers,
Eric
[..]
> Sure, but why drag in these irrelevantia?
> The physical meaning of eps_0 and mu_0, if any,
> is an issue on the pre-1900 level.
As you may remember from previous threads, I disagree with this.
Because eps_0, together with e, h and c fixes the fine-structure
constant, any change in the fine structure constant (I think it
changes at very high energies, like just after the big bang)
will impact either eps_0, e, h or c.
I would choose eps_0 to change, leaving the others as
fundamental constants that can be set to unity. But you could
also change e. (Although that would be on the pre-2000 level)
Gerard
[..]
> One of the reasons I am such a stickler about saying E and D are not
> the same comes from experience with numerical solutions to Maxwell's
> equations. E, being a 1-form, is naturally associated to the edges of
> some mesh. D, being a 2-form, is naturally associated to the faces of
> some mesh.
Does that mean that D transforms as a pseudo-vector?
E is the force per unit charge on a test charge, so it is a vector (Newton/Coulomb).
D is the displacement of (imaginary) charges through a surface
(Coulomb/m^2). It could be a pseudo-vector.
Gerard
> J. J. Lodder wrote:
snip agree
> >If that had been done then you now would not even have known
> >that it is actually possible to introduce these notions,
> >unless you had happened to study history of science.
>
> But now I disagree. Using Heaviside Lorentz units,
> I would still have studied Maxwell's equations for dielectric media
> and been introduced to the concept of epsilon and mu.
> (These quantities would be dimensionless, of course.)
> Then I would learn that epsilon and mu for the vacuum are both exactly 1.
> How nice!
Indeed :-) But, being properly educated in this way
you would be less likely to make the mistake
of thinking that eps = 1
is a physical property of the vacuum.
> That the dielectric constant of the vacuum is 1
> has as much physical meaning as that the speed of light there is 1.
> The speed of light in vacuum may be a very trivial quantity in good units,
> but it retains its physical meaning -- that is the speed that light travels.
I guess this is the old confusion between
'fundamental speed in our universe' and 'speed of light' again.
The first can be taken to be 1,
and then it cannot be measured.
For light, it is of course necessary to establish
that it is actually massless,
which can in principle be done by verifying experimentally
that it travels one nanosecond in a nanosecond.
Best,
Jan
>Toby Bartels wrote:
>>But now I disagree. Using Heaviside Lorentz units,
>>I would still have studied Maxwell's equations for dielectric media
>>and been introduced to the concept of epsilon and mu.
>>(These quantities would be dimensionless, of course.)
>>Then I would learn that epsilon and mu for the vacuum are both exactly 1.
>>How nice!
>Indeed :-) But, being properly educated in this way
>you would be less likely to make the mistake
>of thinking of eps = 1 as a physical property of the vacuum.
Well, this *is* how I was educated, more or less --
I was originally taught in SI units, but I already knew how to
reduce the number of units by setting constants to 1,
so I immediately set eps_0 to 1 and thought about things in that way --
and I *do* think that eps = 1 is a physical property of the vacuum.
A property of a rather vacuous (ha! an unintentional pun!) sort,
but a property nonetheless.
Like saying cardinality = 0 is a mathematical property of the empty set.
Well, this analogy probably won't be so clear to people here,
but it really makes it click for me!
>>That the dielectric constant of the vacuum is 1
>>has as much physical meaning as that the speed of light there is 1.
>>The speed of light in vacuum may be a very trivial quantity in good units,
>>but it retains its physical meaning -- that is the speed that light travels.
>I guess this is the old confusion between
>'fundamental speed in our universe' and 'speed of light' again.
>The first can be taken to be 1,
>and then it cannot be measured.
When I say "speed of light", I mean the speed that light travels.
For the fundamental speed in the universe, I say "one".
>For light, it is of course necessary to establish
>that it is actually massless,
>which can in principle be done by verifying experimentally
>that it travels one nanosecond in a nanosecond.
Yes, and this is a physical fact.
Thus it is a physical property of the vacuum that
light travels there at the speed of 1,
just as it's a physical property of other materials that
light travels at certain speeds in them.
The speed of light in certain materials, as you know,
can be calculated as c = 1/sqrt(eps mu)[*].
Thus, c = 1/sqrt(1*1) = 1 in the vacuum.
[*]Or something like that.
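(For a concrete number, a two-line sketch with rough SI values -- not part
of the argument, just illustration:)

import math

c = 299792458.0                 # m/s
eps_r, mu_r = 1.77, 1.0         # water at optical frequencies, roughly
print(c / math.sqrt(eps_r * mu_r))   # ~2.25e8 m/s, i.e. n ~ 1.33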
-- Toby
to...@math.ucr.edu
>Because eps_0, together with e, h and c fixes the fine-structure
>constant, any change in the fine structure constant (I think it
>changes at very high energies, like just after the big bang)
>will impact either eps_0, e, h or c.
Yeah, specifically e ^_^.
>I would choose eps_0 to change, leaving the others as
>fundamental constants that can be set to unity. But you could
>also change e. (Although that would be on the pre-2000 level)
Should we reopen the discussion about which to change?
It seems patently obvious to me that you would change e,
having set eps_0 and c to 1, and h to 2 pi.
Planck, as we know, originally set e to 1 (and h to 1),
but Planck was not perfect.
-- Toby
to...@math.ucr.edu
>Eric Alan Forgy wrote:
Yeah, given a spacial metric, pseudovectors and 2forms are equivalent.
Most people turn 2forms into vectors of course,
but this requires an orientation in addition to the metric.
I do find it more fundamental not even to assume the metric
and just to deal with the 2form itself -- in certain contexts.
-- Toby
to...@math.ucr.edu
"Gerard Westendorp" <wes...@xs4all.nl> wrote:
>
> Does that mean that D transforms as a pseudo-vector?
> E is the force on a charge, so it is a vector (Newton/Coulomb).
> D is the displacement of (imaginary) charges through a surface
> (Coulomb/m^2). It could be a pseudo vector.
That is a really good question :) I am not an expert on pseudo vectors
because I usually think of them as artifacts of misinterpreting 2-forms (or
bivectors) as vectors. But if that is really true then it would be tempting
to think of D as a pseudo vector also, so there must be something else to
it. I don't think D is a pseudo vector because I have never heard of that
and I probably should have by now if it were (not a very scientific reason,
eh? :)). So let's try to see why not (if not).
Let A be a 1-form in 4d space-time and let
F = dA.
This is a 2-form in space-time. As such, it is (4!)/(2!2!) = 6 dimensional
with three space-space dimensions and three space-time dimensions. That is
just big enough to accommodate two 3d vectors. So, if you choose a reference
frame, which amounts to choosing a time axis, you can decompose F into two
parts:
F = B + E/\dt.
(Note: This decomposition is quite arbitrary because dt is arbitrary.)
If you write out F in all its gory details it becomes:
F = (B_23 dx^23 + B_31 dx^31 + B_12 dx^12)
+ (E_1 dx^1 + E_2 dx^2 + E_3 dx^3)/\dt
where dx^ij = dx^i /\ dx^j. Under a parity transformation dx^i -> -dx^i, the
components of E change sign whereas the components of B do not change sign.
Thus, you can conclude that E is a vector and B is a pseudo vector. This
follows simply because B is a 2-form.
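(A toy component-level check of that sign bookkeeping -- assuming the usual
identification F_{0i} = E_i, F_{ij} = eps_{ijk} B_k; sign conventions may
differ:)

import numpy as np

E = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

F = np.zeros((4, 4))            # F_{mu nu}
F[0, 1:] = E
F[1:, 0] = -E
F[1, 2], F[2, 3], F[3, 1] = B[2], B[0], B[1]
F[2, 1], F[3, 2], F[1, 3] = -B[2], -B[0], -B[1]

P = np.diag([1.0, -1.0, -1.0, -1.0])   # parity: dx^i -> -dx^i
Fp = P @ F @ P.T                       # F'_{mu nu} = P_mu^a P_nu^b F_{ab}

print(Fp[0, 1:])                       # [-1. -2. -3.]: E components flip
print(Fp[1, 2], Fp[2, 3], Fp[3, 1])    # 6.0 4.0 5.0 : B components do not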
Now, the Hodge star # acts on the space-space basis elements as
#dx^23 = O(123)*dx^1/\dt
#dx^31 = O(123)*dx^2/\dt
#dx^12 = O(123)*dx^3/\dt
where O(123) is +/-1 and keeps track of the orientation. The Hodge star acts
on the space-time basis elements as
#(dx^1/\dt) = -O(123)*dx^23
#(dx^2/\dt) = -O(123)*dx^31
#(dx^3/\dt) = -O(123)*dx^12
Therefore
#F
= #B + #(E/\dt)
= O(123)*(B_23 dx^1 + B_31 dx^2 + B_12 dx^3)/\dt
-O(123)*(E_1 dx^23 + E_2 dx^31 + E_3 dx^12)
= H/\dt - D
= (H_1 dx^1 + H_2 dx^2 + H_3 dx^3)/\dt
- (D_23 dx^23 + D_31 dx^31 + D_12 dx^12)
so that
H_1 = O(123)*B_23
H_2 = O(123)*B_31
H_3 = O(123)*B_12
and
D_23 = O(123)*E_1
D_31 = O(123)*E_2
D_12 = O(123)*E_3.
Ok, now the trick is that under the parity transformation
O(123) -> -O(123)
so you pick up one "-" for the basis elements of H, but then you pick up
ANOTHER "-" from the Hodge star. Then the overall sign of H is not changed
under the parity transformation so that H is a pseudo vector as well.
On the other hand, the basis elements of D do NOT change sign
under the parity transformation, but the components DO pick up a "-", so
overall, D picks up a "-" under the parity transformation. Hence D is a
vector as well (as it should be, based on my earlier unscientific reasoning :))
After all that mess, the short answer to the question "Is D a pseudo
vector?" is apparent. Although D is a 2-form, that does not mean that D is a
pseudo vector because D is the Hodge dual of a 1-form E and the HODGE STAR
PICKS UP AN ADDITIONAL SIGN UNDER A PARITY TRANSFORMATION (in this case, but
not in general).
Thanks for a nice question! It made me think. I hope my answer makes sense.
Eric
"Toby Bartels" <to...@math.ucr.edu> wrote:
> Gerard Westendorp wrote:
>
> >Does that mean that D transforms as a pseudo-vector?
> >E is the force on a charge, so it is a vector (Newton/Coulomb).
> >D is the displacement of (imaginary) charges through a surface
> >(Coulomb/m^2). It could be a pseudo vector.
>
> Yeah, given a spacial metric, pseudovectors and 2forms are equivalent.
> Most people turn 2forms into vectors of course,
> but this requires an orientation in addition to the metric.
> I do find it more fundamental not even to assume the metric
> and just to deal with the 2form itself -- in certain contexts.
Are you saying D DOES correspond to a pseudo vector? I just wrote a
long post declaring that D does NOT correspond to a pseudo vector, but
rather to a vector. Unless I made an error, the Hodge star causes a
sign reversal (in the case I was considering) under a parity
transformation. This additional sign due to the Hodge star makes the 2
form D correspond to a vector while the 1 form H corresponds to a
pseudo vector. This seemed to make perfect sense while I was writing
it :)
Eric
PS: The moral to this story is that VECTORS ARE EVIL!! :) Everyone
should start using differential forms and all this "pseudo" nonsense
will disappear once and for all :)
In ``Applied Differential Geometry,'' [1] William Burke claims
that D is not an ``ordinary'' 2-form, but a ``twisted'' 2-form;
``twisted'' 2-forms apparently transform oppositely to ordinary
2-forms under parity. So apparently, Burke does not think one can
dismiss the orientation so cavalierly... (In fact, he appears to
explicitly represent the orientation by introducing two different
Hodge-operators into his 3+1 decompositions of forms: One for 3-D
forms, and one for 4-D forms.)
[1] <http://www.amazon.com/exec/obidos/ASIN/0521269296>
-- Gordon D. Pusch
perl -e '$_ = "gdpusch\@NO.xnet.SPAM.com\n"; s/NO\.//; s/SPAM\.//; print;'
[..]
> >I would choose eps_0 to change, leaving the others as
> >fundamental constants that can be set to unity. But you could
> >also change e. (Although that would be on the pre-2000 level)
>
> Should we reopen the discussion about which to change?
Probably not, because there is no objective way to decide
which makes more sense.
Gerard
> "J. J. Lodder" wrote:
>
> [..]
>
> > Sure, but why drag in these irrelevantia?
> > The physical meaning of eps_0 and mu_0, if any,
> > is an issue on the pre-1900 level.
>
> As you may remember from previous threads, I disagree with this.
Yes I remember. Somebody should write a FAQ on this sometime :-)
> Because eps_0, together with e, h and c fixes the fine-structure
> constant, any change in the fine structure constant (I think it
> changes at very high energies, like just after the big bang)
> will impact either eps_0, e, h or c.
Perhaps it does occur in -your- fine-structure constant;
it doesn't occur in mine.
This by itself is sufficient to show that the occurrence of eps_0
in alpha is a matter of human convention,
rather than an aspect of nature.
> I would choose eps_0 to change, leaving the others as
> fundamental constants that can be set to unity. But you could
> also change e. (Although that would be on the pre-2000 level)
Of course you can,
(no need even to change mu_0 accordingly,
who cares about yet another arbitrary constant :-)
but it would merely imply
that you choose to describe nature
with a time-dependent unit system.
Possible, but rather inconvenient,
Jan
PS If you really want to create a mess of this kind
you could also define c to be a function of time,
by decreeing that the meter shall be the distance
that light travels in f(t) seconds :-)
> Eric Alan Forgy wrote:
>
> [..]
> > One of the reasons I am such a stickler about saying E and D are not
> > the same comes from experience with numerical solutions to Maxwell's
> > equations. E, being a 1-form, is naturally associated to the edges of
> > some mesh. D, being a 2-form, is naturally associated to the faces of
> > some mesh.
>
> Does that mean that D transforms as a pseudo-vector?
> E is the force on a charge, so it is a vector (Newton/Coulomb).
> D is the displacement of (imaginary) charges through a surface
> (Coulomb/m^2). It could be a pseudo vector.
This is a very typical example of the misunderstandings
that may arise from erroneous understanding of unit systems.
D is just a partial field,
which arises because we find it convenient
to split the total electric field -mentally- into parts.
Since D is a partial electric field,
it must transform as an electric field.
Jan
Matter is composed of particles,
so on a fundamental level there are just particles
and (microscopic) electric fields.
Now one can average these to obtain a macroscopic field.
(You may recognise Lorentz' program here)
It is often convenient to split the total macroscopic field
-mentally- into the field that would have been there
without the particles, and the rest.
Both components are electric fields,
even if you happen to prefer to measure one
in different units than the other.
Best,
Jan
> J. J. Lodder wrote:
>
> >Toby Bartels wrote:
>
> >>But now I disagree. Using Heaviside Lorentz units,
> >>I would still have studied Maxwell's equations for dielectric media
> >>and been introduced to the concept of epsilon and mu.
> >>(These quantities would be dimensionless, of course.)
> >>Then I would learn that epsilon and mu for the vacuum are both exactly 1.
> >>How nice!
>
> >Indeed :-) But, being properly educated in this way
> >you would be less likely to make the mistake
> >of thinking of eps = 1 as a physical property of the vacuum.
>
> Well, this *is* how I was educated, more or less --
> I was originally taught in SI units, but I already knew how to
> reduce the number of units by setting constants to 1,
> so I immediately set eps_0 to 1 and thought about things in that way --
> and I *do* think that eps = 1 is a physical property of the vacuum.
> A property of a rather vacuous (ha! an unintentional pun!) sort,
> but a property nonetheless.
Guess we are down to semantics in the meantime.
You prefer to think of eps_0 = 1 as a property of the vacuum,
I cannot possibly see how a quantity which could just as well
have been defined to be 37 can be a property of anything
but your particular definitions.
Best,
Jan
snip
> After all that mess, the short answer to the question "Is D a pseudo
> vector?" is apparent. Although D is a 2-form, that does not mean that D is a
> pseudo vector because D is the Hodge dual of a 1-form E and the HODGE STAR
> PICKS UP AN ADDITIONAL SIGN UNDER A PARITY TRANSFORMATION (in this case, but
> not in general).
>
> Thanks for a nice question! It made me think. I hope my answer makes sense.
Of course!
It is an excessively beautiful way
to deduce an excessively trivial result.
Jan
"Gordon D. Pusch" <gdp...@NO.xnet.SPAM.com> wrote:
>
> In ``Applied Differential Geometry,'' [1] William Burke claims
> that D is not an ``ordinary'' 2-form, but a ``twisted'' 2-form;
> ``twisted'' 2-forms apparently transform oppositely to ordinary
> 2-forms under parity. So apparently, Burke does not think one can
> dismiss the orientation so cavalierly... (In fact, he appears to
> explicitly represent the orientation by introducing two different
> Hodge-operators into his 3+1 decompositions of forms: One for 3-D
> forms, and one for 4-D forms.)
>
> [1] <http://www.amazon.com/exec/obidos/ASIN/0521269296>
Good point. This concept of twisted forms is related to another long post I
wrote in this thread. I happen to have a distaste for "twisted" forms. They
are about as bad as "pseudo" vectors. Differential forms are beautiful
objects, why de-beautify them with such un-natural complications? A twisted
p-form is simply the Hodge dual of a regular (n-p)-form. The Hodge star
carries information about the orientation and hence may alter the behavior
under parity transformations. The way Burke presents them is as if they are
something "different" that needs to be learned in addition to regular forms.
I'll have to check to see what he actually says, but he probably could have
saved some ink by simply writing "twisted forms are Hodge duals of regular
forms." Apologies if he DOES say that :)
Eric
PS: ALL of this can be avoided if people just worked with forms properly and
stopped trying to write silly things like B = B_1 dx^23 + B_2 dx^31 + B_3
dx^12, when it should be B = B_23 dx^23 + B_31 dx^31 + B_12 dx^12. The
former is a consequence of the desire to work exclusively with vectors when
bivectors and 2-forms are the appropriate objects. This goes back to the
original subject of the thread, E and D are NOT the same physically or
mathematically (unless you relax the definition of "same" to mean
isomorphic, in which case E could be the same as H in time-harmonic cases,
but that is just semantics and not really worthy of a discussion).
>After all that mess, the short answer to the question "Is D a pseudo
>vector?" is apparent. Although D is a 2-form, that does not mean that D is a
>pseudo vector because D is the Hodge dual of a 1-form E and the HODGE STAR
>PICKS UP AN ADDITIONAL SIGN UNDER A PARITY TRANSFORMATION (in this case, but
>not in general).
>
>Thanks for a nice question! It made me think. I hope my answer makes sense.
One observation that is not strictly related to the original question,
but, somehow, is relevant:
The relation between D and E, as well as the relation between H and B,
in vacuum, involves the Hodge operator on 2-forms. The Hodge operator on
2-forms in 4d (as well as the Hodge operator on n/2-forms in nd) is
conformally invariant. This is an easy exercise. But it is less known
that a Hodge * operator on 2-forms in (-+++) signature, with the
properties *^2 = -1 and Hermitian (with respect to the natural inner
product), is sufficient to define uniquely the light cone (=
conformal) structure of space-time.
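(The easy-exercise part is quick to spot-check numerically; here is a rough
Python sketch with a random Riemannian metric for simplicity -- the
light-cone statement itself is of course not checked this way:)

import numpy as np
from itertools import permutations

def levi_civita(n=4):
    eps = np.zeros((n,) * n)
    for p in permutations(range(n)):
        eps[p] = np.linalg.det(np.eye(n)[list(p)])   # sign of the permutation
    return eps

def hodge_on_2forms(F, g):
    # (*F)_{mn} = (1/2) sqrt|det g| eps_{mnab} g^{ac} g^{bd} F_{cd}
    ginv = np.linalg.inv(g)
    dens = np.sqrt(abs(np.linalg.det(g)))
    return 0.5 * dens * np.einsum('mnab,ac,bd,cd->mn', levi_civita(), ginv, ginv, F)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
g = A @ A.T + 4 * np.eye(4)                    # some metric (Riemannian, for the check)
F = rng.standard_normal((4, 4)); F = F - F.T   # an arbitrary 2-form

print(np.allclose(hodge_on_2forms(F, g), hodge_on_2forms(F, 3.7 * g)))   # True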
ark
>Toby Bartels wrote:
>>A property of a rather vacuous (ha! an unintentional pun!) sort,
>>but a property nonetheless.
>You prefer to think of eps_0 = 1 as a property of the vacuum,
>I cannot possibly see how a quantity which could just as well
>have been defined to be 37 can be a property of anything
>but your particular definitions.
Well, if you redefine epsilon_0 to be 37,
then you redefine all of the other epsilons by a factor of 37 too.
So I guess that none of them are physical properties!
-- Toby
to...@math.ucr.edu
>Toby Bartels wrote:
>>Yeah, given a spacial metric, pseudovectors and 2forms are equivalent.
>>Most people turn 2forms into vectors of course,
>>but this requires an orientation in addition to the metric.
>>I do find it more fundamental not even to assume the metric
>>and just to deal with the 2form itself -- in certain contexts.
>In ``Applied Differential Geometry,'' [1] William Burke claims
>that D is not an ``ordinary'' 2-form, but a ``twisted'' 2-form;
>``twisted'' 2-forms apparently transform oppositely to ordinary
>2-forms under parity.
Yes, you are correct. As Eric pointed out in another post,
D corresponds (given a spacial metric) to a spacial vector.
Thus it is, fundamentally, a *pseudo*2form
(or "twisted" 2form, if I understand that term correctly).
Given a metric, the following are equivalent:
p-vector = p-form = pseudo(n-p)vector = pseudo(n-p)form.
Given an orientation, the following are equivalent:
p-vector = pseudo-p-vector;
p-form = pseudo-p-form.
Given a volume form, the following are equivalent:
p-vector = (n-p)form;
pseudo-p-vector = pseudo(n-p)form.
Given a metric and an orientation (and hence a volume form),
then all of the above are equivalent.
-- Toby
to...@math.ucr.edu
>PS: ALL of this can be avoided if people just worked with forms properly
*if* you have a metric (or an orientation) around.
Otherwise you can't get rid of the pseudos.
-- Toby
to...@math.ucr.edu
>D is just a partial field,
>which arises because we find it convenient
>to split the total electric field -mentally- into parts.
>Since D is a partial electric field,
>it must transform as an electric field.
Right, this is why we should see immediately that,
even though we may be in a context where
it's appropriate to distinguish E from D
and even to make D a 2form while E is a 1form,
the underlying physical reality dictates that
D must actually be a pseudo2form,
since this underlying physical reality
provides only a metric and not an orientation.
-- Toby
to...@math.ucr.edu
Yes.
"Duality and conformal structure"
Tevian Dray, Ravi Kulkarni and Joseph Samuel
J. Math. Phys. 30, 1306-1309 (1989).
-charlie
Ok, thanks for expounding a bit. Now we are getting somewhere.
"J. J. Lodder" <nos...@de-ster.demon.nl> wrote:
> Matter is composed of particles,
> so on a fundamental level there is just particles
> and (microscopic) electric fiels.
> Now one can average these to obtain a macroscopic field.
> (You may recognise Lorentz' program here)
> It is often convenient to split the total macroscopic field
> -mentally- into the field that would have been there
> without the particles, and the rest.
It seems you are saying that D is some field that only has meaning as a
macroscopic description of some underlying atomistic phenomenon. I would
charge that even at the microscopic scale D is just as fundamental a
field as E, while still being distinct.
Now, of the four fields E,B,H, and D, the fields E and B are actually more
closely related than E and D. Likewise H and D are more closely related than
H and B. E and B are simply rather arbitrary components of a single
geometrical object F, while H and D are simply rather arbitrary components
of a second geometrical object, G.
====Entering math zone====
Stepping up the maths for a second, the action for Maxwell's equations can
then be written as:
S = int_M F/\G
Note that this looks a lot like the action for BF-theory, which is no
coincidence. BF-theory with some constraints leads to general relativity.
The above action with some constraints leads to electromagnetic theory.
A more standard, which doesn't mean more correct, way to look at
electromagnetic theory is to start right off declaring that G = #F (# =
Hodge star), then
S = int_M F/\#F.
This is particularly nice, because that is just the global inner product of
forms, i.e.
S = [F,F] := int_M F/\#F = int_M (F,F) vol,
where (F,F) is the local inner product of forms. Varying the action with
respect to A leads to
del F = 0,
where del is the adjoint of d, i.e.
[F,F] = [dA,F] = [A,del F].
Maxwell's equations without sources can then be stated simply as
dF = 0 (which is almost a tautology because F = dA)
and
del F = 0
(with sources, the second equation becomes del F = j).
There are several ways to interpret all of this. Here are two that come to
mind:
1.) You can begin with S = int_M F/\G with constraints (akin to BF-theory)
in which case you will find that
dF = 0
and
dG = 0,
with the constitutive relation G = #F, (with sources, you have dG = J, where
J is a source 3-form). Note that in this theory, # may be degenerate. In
this case, saying that E and D are the same amounts to saying F and G are
the same. However, this is like saying B and F are the same in BF-theory,
which I wouldn't do.
2.) You can begin with S = [F,F] := int_M F/\#F in which case you will find
that
dF = 0
and
del F = 0,
where del is the adjoint of d, (with sources, you have del F = j, where j is
a source 1-form). In this case, there IS NO G. This amounts to not even
having D or H in the model at all, so the question of whether E is the same
as D is moot :)
====End of math zone====
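(Incidentally, the "almost a tautology" dF = 0 survives discretization:
difference operators along different axes commute, so the discrete d
satisfies dd = 0 exactly. A toy finite-difference check in Python, for the
0-form analogue d(d phi) = 0 -- not the mixed discretization a real solver
would use:)

import numpy as np

n = 32
x, y, z = np.meshgrid(*3 * [np.linspace(0, 1, n)], indexing='ij')
phi = np.sin(2 * np.pi * x) * np.cos(3 * np.pi * y) * z**2   # arbitrary 0-form

gx, gy, gz = np.gradient(phi)                 # discrete "d phi", a 1-form
curl_x = np.gradient(gz, axis=1) - np.gradient(gy, axis=2)
curl_y = np.gradient(gx, axis=2) - np.gradient(gz, axis=0)
curl_z = np.gradient(gy, axis=0) - np.gradient(gx, axis=1)

print(np.max(np.abs([curl_x, curl_y, curl_z])))   # essentially zero (roundoff only)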
Sorry about that :)
I can appreciate the atomistic viewpoint of D. In fact, you can probably say
that most, if not all, EM treatises present D like that. However, I am also
of the opinion that physics is not a democracy, so I tend to give very
little weight to what the majority says (i.e. counting the number of
publications on a certain subject as an indication of the physical validity
of that subject is nonsense) and give the weight to the physics and
mathematics :)
Mathematics has a voice, and I think we should listen to it. If the voice of
differential geometry/differential forms has any relevance to physics,
electromagnetic theory in particular, then that voice says that E and D are
different mathematical objects (even at the microscopic level) that are
related through the Hodge star.
Here is just a thought (that will reveal the exotic things that run through
my mind). Magnetic charges are somewhat controversial entities, but if they
were to exist, they would appear as purely topological beasts, i.e. as points
removed from the topological space.
Electrons (fermions) are much less controversial entities, but it may be
that even these guys are also topological beasts. For instance, this paper
Fermions and Topology
Authors: Lee Smolin
http://xxx.lanl.gov/abs/gr-qc/9404010
(and check out the "refers to" and "cited by" as well) suggests that
fermions are actually exotic topological objects, i.e. wormholes. To say it
in less sci-fi terms, it is possible that fermions result from the
identification of distinct points in space-time, where the electron is one
end of the wormhole and the positron is the other.
Thus, even the atomistic viewpoint may involve complicated topological
spaces and quantum gravity *shudder*. Hence, any book that claims to
understand the distinction (or non-distinction) between E and D is
automatically suspect. No matter what turns out to be the "true" explanation
for all this stuff, I am fairly certain that differential forms *in one way
or another* will survive (the emphasis is there because it may be that
space-time is not even a smooth manifold, but even then you will probably be
able to construct some kind of algebraic version of differential forms that
survives).
I think your post answered my question. We just think of things differently
and that is fine. I was afraid I had made some blunder :) I may have an
unpopular view of things, but that is nothing new to me, so I'll keep on
going the way I have been :)
Thanks,
Eric
Also earlier: "Electromagnetic permeability of the vacuum and the
light cone structure"
A. Jadczyk, Bull. Acad. Pol. Sci. 27 (1979) 91-94.
Online at http://www.cassiopaea.org/quantum_future/emp.htm with extra
info.
ark
> J. J. Lodder wrote:
>
> >Toby Bartels wrote:
>
> >>A property of a rather vacuous (ha! an unintentional pun!) sort,
> >>but a property nonetheless.
>
> >You prefer to think of eps_0 = 1 as a property of the vacuum,
> >I cannot possibly see how a quantity which could just as well
> >have been defined to be 37 can be a property of anything
> >but your particular definitions.
>
> Well, if you redefine epsilon_0 to be 37,
> then you redefine all of the other epsilons by a factor of 37 too.
> So I guess that none of them are physical properties!
No, the relative ones, water at eps_r = 80 etc,
stay the same,
Jan
But there is:
In the laws of physics as we know them,
whatever choice for eps_0 you make,
it is only the combination e^2 / eps_0 that occurs.
(And eps_0 E^2, of course, leaving the force eE free of eps_0)
You apparently want to see some meaning for eps_0 by itself,
without a corresponding e^2.
Therefore it is up to you to elucidate
which law of physics you want changed,
and what the -physical- (that is observable)
consequences will be.
As long as you don't, you have said nothing,
physically speaking,
except perhaps that you prefer other units,
Jan
> Gordon D. Pusch wrote:
>
>> Toby Bartels wrote:
>
>>> Yeah, given a spacial metric, pseudovectors and 2forms are equivalent.
>>> Most people turn 2forms into vectors of course,
>>> but this requires an orientation in addition to the metric.
>>> I do find it more fundamental not even to assume the metric
>>> and just to deal with the 2form itself -- in certain contexts.
>
>> In ``Applied Differential Geometry,'' [1] William Burke claims
>> that D is not an ``ordinary'' 2-form, but a ``twisted'' 2-form;
>> ``twisted'' 2-forms apparently transform oppositely to ordinary
>> 2-forms under parity.
>
> Yes, you are correct. As Eric pointed out in another post,
> D corresponds (given a spacial metric) to a spacial vector.
> Thus it is, fundamentally, a *pseudo*2form
> (or "twisted" 2form, if I understand that term correctly).
Assuming *I* understand Burke's terms correctly, then I believe you do... :-/
Insofar as I can figure it out from his prose and figures, Burke's
``twisting'' of geometric objects appears to associate ``circulations''
with them instead of ``directedness;'' hence, a ``twisted'' vector or
1-form has a ``screw-sense'' instead of an ``arrowhead,'' a ``twisted''
bi-vector or 2-form has a ``circulation'' around its plaquette or
``egg-crate'' cell, etc. I can't visualize what sort of ``circulation''
a ``twisted'' tri-vector or 3-form would have --- and apparently neither
can Burke, because he doesn't have an illustration of one in his book... :-/
>Toby Bartels wrote:
>>J. J. Lodder wrote:
>>>I cannot possibly see how a quantity which could just as well
>>>have been defined to be 37 can be a property of anything
>>>but your particular definitions.
>>Well, if you redefine epsilon_0 to be 37,
>>then you redefine all of the other epsilons by a factor of 37 too.
>>So I guess that none of them are physical properties!
>No, the relative ones, water at eps_r = 80 etc,
>stay the same,
So you have just wasted your time defining eps_0 to be 37,
switching to relative epsilons, and getting back where you started.
Certainly one *can* do this, but it's silly!
And now eps_r = 1 is a physical property of the vacuum.
-- Toby
to...@math.ucr.edu
Nice. I will keep your result showing how Hodge * on 2-forms determines
conformal structure in mind next time the issue comes up, which it
does from time to time.
As long as I am writing, I might mention (changing the subject entirely)
that your series of papers with Coquereaux and the corresponding book (all
from long ago) on Kaluza-Klein reductions have been very useful to us here
at USU since we are working on various aspects of symmetry reduction in
gravitational theories. I would highly recommend them to anyone who is
interested in such things.
-charlie
snip agreements
> I think your post answered my question. We just think of things differently
> and that is fine. I was afraid I had made some blunder :) I may have an
> unpopular view of things, but that is nothing new to me, so I'll keep on
> going the way I have been :)
We appear to agree that (as far as is known now)
all of the physics (so far) can be done with E only,
D being an averaged macroscopic E-field.
We also agree that you can build mathematical structures
in which E and D appear differently.
Indeed, these mathematical structures -may- point to new
(as yet undiscovered) aspects of reality.
Reformulating theories in as many ways as possible
is indeed a way to see new possible openings.
But, until you manage to find some physical content,
the added formalism remains extra overhead.
(and with Ockham against it :-)
The D = pseudovector? issue in this thread is a case in point:
trivially wrong in the standard view,
while the extra formalism requires arguments (with the possibility of errors).
But, of course, some new physics may come out of it eventually.
Good luck with your quest,
Jan
>Toby Bartels <to...@math.ucr.edu> writes:
>> Yes, you are correct. As Eric pointed out in another post,
>> D corresponds (given a spacial metric) to a spacial vector.
>> Thus it is, fundamentally, a *pseudo*2form
>> (or "twisted" 2form, if I understand that term correctly).
>Assuming *I* understand Burke's terms correctly, then I believe you do... :-/
>Insofar as I can figure it out from his prose and figures, Burke's
>``twisting'' of geometric objects appears to associate ``circulations''
>with them instead of ``directedness;''
Hmm, and I thought we were talking about "twisting" in the usual
mathematical sense of "tensoring with a line bundle": we are
taking the bundle whose sections are p-forms and tensoring it
with the line bundle whose sections are pseudoscalars, to get
a new bundle whose sections are "twisted p-forms". People
also call this sort of trick "twisting by a line bundle".
Twisting by a line bundle is a handy way to get new bundles from
old ones. In physics we do this a lot using the line bundle whose
sections are "densities". Twisting by one of this bundle is
also called "densitizing". So you'll also see people, especially
in general relativity, running around talking about "densitized
vector fields", "doubly densitized 2-forms", and the like.
I don't know Burke or his book. If Burke is a fancy-ass mathematical
guy, he was probably talking about twisting by a line bundle. If
he is a down-to-earth physical guy, he was probably alluding to
what you're talking about - the fact that pseudovectors describe
"circulation" rather than "direction".
If you cheat somewhat and draw a 2-form as a little "oriented
plaquette", you should draw one of these twisted 2-forms as an
"unoriented plaquette".
("Cheat somewhat" because it's really bivectors that look like
oriented plaquettes, and even then, it's only the decomposable
ones that really look like that - i.e. those of the form v ^ w.)
> In article <gizo6mx...@pusch.xnet.com>,
> Gordon D. Pusch <gdp...@NO.xnet.SPAM.com> wrote:
>> Insofar as I can figure it out from his prose and figures, Burke's
>> ``twisting'' of geometric objects appears to associate ``circulations''
>> with them instead of ``directedness;''
> Hmm, and I thought we were talking about "twisting" in the usual
> mathematical sense of "tensoring with a line bundle": we are
> taking the bundle whose sections are p-forms and tensoring it
> with the line bundle whose sections are pseudoscalars, to get
> a new bundle whose sections are "twisted p-forms". People
> also call this sort of trick "twisting by a line bundle".
>
> Twisting by a line bundle is a handy way to get new bundles from
> old ones. In physics we do this a lot using the line bundle whose
> sections are "densities". Twisting by one of this bundle is
> also called "densitizing". So you'll also see people, especially
> in general relativity, running around talking about "densitized
> vector fields", "doubly densitized 2-forms", and the like.
Yes, Burke explicitly states that ``twisted'' forms are ``densities.''
> I don't know Burke or his book. If Burke is a fancy-ass mathematical
> guy, he was probably talking about twisting by a line bundle. If
> he is a down-to-earth physical guy, he was probably alluding to
> what you're talking about - the fact that pseudovectors describe
> "circulation" rather than "direction".
Burke appears to be a ``fancy-ass mathematical guy'' who is trying to
write to the ``down-to-earth physical guys'' and ``convert'' them to the
``fancy-ass mathematics'' way of doing things, e.g., differential forms,
line bundles, and all that. (The dedication of his book reads: ``To all
those who, like me, have wondered how in hell you can change $\dot{q}$
without changing $q$.'')
(snip descriptions of how to separately measure E and flux of D)
>> You may choose a unit system in which D = E in vacuum in Euclidean
>> space. Suppose that we do this. If the experiment above has
>> been performed in Euclidean space, then the total flux of E through
>> the boundary will also give the charge contained in the volume.
>>
>> But in a curved space, the flux of E though the surface will *not*
>> generally give the charge enclosed, unless it happens to be 0.
>> So, we can conclude that E cannot equal D at every point on that
>> surface.
>
>Let's not drag curved spaces into this discussion.
On the contrary -- they are essential to the point I was trying to
make! If we restrict the situation to Euclidean spaces, then I
would have to agree with your original statement: that if sensible
units where D = E were chosen long ago, there would have never been
any reason to introduce the constant epsilon_0. I disagree with
this strongly: epsilon_0 will still show up in some guise or
another when electromagnetic experiments are performed in situations
where the curvature of space changes with position.
There are even easier experiments to measure D and E than the ones I
proposed, in Bamberg and Sternberg's "A Course in Mathematics for
Students of Physics." Measure the kinetic energy change imparted to
various charged particles when they have moved in various directions.
In the limit of small charges and short distances, the ratio of this
energy change to the product of the distance traveled and the charge
is the (component of the) electric field E (in the direction traveled).
Now, take two very thin conducting sheets of metal of equal area, touch
them together, and bring them back apart. Measure the charge on
each plate, and divide by the area of the plate. In the limit as this
area becomes very small, this number is the component of D (oriented
with the plates' orientation).
My point is that: if units are chosen such that the magnitudes of
D and E are equal in a flat space, then they will NOT be equal,
using the exact same procedures, in certain locations in curved
spaces. And if they are found to be equal at some point in a space of
varying curvature, then they will not generally be equal at another
point in the same space.
I hope that the above experiments make the difference between D and
E very clear: E is associated with the direction "radial" to a
point charge, while D is associated with the two *transverse*
directions. (Going between the two is the role performed by
the Hodge star operator.)
If we never consider non-flat spaces, then I agree that D and E can
always be chosen to have the same magnitude in vacuum. But that's like
trying to argue that gauge fields can have no physical meaning -- by
restricting oneself to gauge fields that are "pure gauge" only! Not
fair.
In another article in this thread, J. J. Lodder wrote:
> Likewise, you would not claim that a particle
> actually has two positions, x_\mu and x^\nu,
> because a covariant vector is something entirely different
> from a contravariant one, mathematically speaking.
The metric is certainly used to raise and lower indices on a
position vector. The metric can also be used to get D from E
in vacuum and vice-versa. So I think I can see your point here:
that knowledge of g allows us to do either.
But I think that the example may be somewhat misleading beyond
that, for two reasons. The first is that D and E are *not* simply
related to each other by raising and lowering indices! The Hodge
star operator is also involved (although g determines it by providing
a preferred way to measure volumes). The second reason is that
although I cannot think of a way to experimentally measure a
particle's covariant versus its contravariant position, the above
experiments are conceptually and operationally *different* ways
of getting numbers for D and E out.
And in another article:
>D is just a partial field,
>which arises because we find it convenient
>to split the total electric field -mentally- into parts.
Now, this I do not agree with at all! Maxwell's equations show
quite clearly that the way E can be derived from a four-potential
is *very* different from the way D can be, for instance.
"John Baez" <ba...@galaxy.ucr.edu> wrote:
> Twisting by a line bundle is a handy way to get new bundles from
> old ones. In physics we do this a lot using the line bundle whose
> sections are "densities". Twisting by one of this bundle is
> also called "densitizing". So you'll also see people, especially
> in general relativity, running around talking about "densitized
> vector fields", "doubly densitized 2-forms", and the like.
Since you mention it, I'll go ahead and ask a question that's been bugging
me for a while. In the "modern" canonical formulation of general relativity,
you have a Lie-algebra valued 1-form (connection) A and a densitized Lie
algebra valued (n-2)-form E.
One thing that has bothered me, and was re-enforced while discussing the
senselessness of "pseudo" and "twisted" differential p-forms was that both
of these are simply Hodge duals of regular p-forms. I don't care for the
additional adjectives "pseudo" and "twisted" because it seems to imply that
these are somehow something "different" that needs to be learned in addition
to regular forms. This is not true. Regular forms are all you need to learn,
and these other beasts are just their Hodge duals. The additional
"complication" that seems to be necessary when people start throwing around
terms like pseudo and twisted forms makes it seem like a burden rather than
a blessing to learn differential forms. For someone like me who is trying to
convince engineers to use differential forms, this is a big problem. Forms
are simple and beautiful!! Why mess them up?! Maybe it is a conspiracy by
physicists and mathematicians. They don't want engineers to know how simple
all this "fancy-ass mathematics" really is ;)
Anyway, my complaint against the use of "pseudo" and "twisted" adjectives to
describe Hodge duals of regular forms seems to apply to densitized forms as
well. But this is probably a more serious complaint because it goes at the
heart of canonical QG. Instead of using a 1-form connection A and densitized
(n-2)-form
E, why not just use a regular 2-form?! What is being called the densitized
(n-2)-form E, is just the Hodge dual of a regular 2-form. In fact, E seems
to be just the Hodge dual of the curvature F = DA itself! So there really
aren't two different fields E and A, there is one field F and the Hodge star
*. I don't know which is easier, to subsume the Hodge into a new field E =
*F and vary E and A independently, or to just vary A and * (or maybe vol).
For instance, the Lagrangian for EF-theory (going with the suggestion in
gr-qc/9905087) is given by
L = tr(F/\E),
but why not just write this as
L = tr(F/\*F) = (F,F) vol
? Then you can vary A as usual, but then you could also vary either * or
vol. Is there really some good reason for dealing with "densitized pseudo
twisted differential forms" that I am just not aware of, or is this an
example of academic inertia, "Because that's the way it's always been around
here." (http://www.ccem.uiuc.edu/ericf/apes.html)
Eric
> J. J. Lodder wrote:
> >Toby Bartels wrote:
> >>J. J. Lodder wrote:
> >>>I cannot possibly see how a quantity which could just as well
> >>>have been defined to be 37 can be a property of anything
> >>>but your particular definitions.
> >>Well, if you redefine epsilon_0 to be 37,
> >>then you redefine all of the other epsilons by a factor of 37 too.
> >>So I guess that none of them are physical properties!
> >No, the relative ones, water at eps_r = 80 etc,
> >stay the same,
> So you have just wasted your time defining eps_0 to be 37,
> switching to relative epsilons, and getting back where you started.
> Certainly one *can* do this, but it's silly!
Indeed! :-)
You are a hundred years late,
you should have told these 'practical' types who insisted
on an eps_0 != 1 so, back when the situation was still open.
And:
8.854 187 817 ... * 10^-12 F/m
is of course far more practical than 37 :-)
> And now eps_r = 1 is a physical property of the vacuum.
Still isn't,
for eps_{relative} is defined to be
eps_{material} / eps_{vacuum},
so eps_r (vacuum) is not a physical property of the vacuum,
it's a tautology,
Jan
[..]
> My point is that: if units are chosen such that the magnitudes of
> D and E are equal in a flat space, then they will NOT be equal,
> using the exact same procedures, in certain locations in curved
> spaces. And if they are found to be equal at some point in a space of
> varying curvature, then they will not generally be equal at another
> point in the same space.
So eps_0 will be a function of position.
Would you say the same is true for the elementary charge (e) ?
I ask this because I believe if the fine structure constant (e^2/eps_0)
is a function of space, this will express itself in a varying eps_0,
but not necessarily in a varying e.
Gerard
>Since you mention it, I'll go ahead and ask a question that's been bugging
>me for a while. In the "modern" canonical formulation of general relativity,
>you have a Lie-algebra valued 1-form (connection) A and a densitized Lie
>algebra valued (n-2)-form E.
>One thing that has bothered me, and was re-enforced while discussing the
>senselessness of "pseudo" and "twisted" differential p-forms was that both
>of these are simply Hodge duals of regular p-forms.
What Hodge dual?
twisted (n-p)forms are equivalent to p-forms
*if there is a metric*.
>For instance, the Lagrangian for EF-theory (going with the suggestion in
>gr-qc/9905087) is given by L = tr(F/\E),
>but why not just write this as L = tr(F/\*F) = (F,F) vol?
What's vol? Now you need a metric *and* an orientation!
(Actually, you only need a metric --
if you're willing to make vol a *twisted* n-form.)
-- Toby
to...@math.ucr.edu
>Eric Forgy wrote:
>>Since you mention it, I'll go ahead and ask a question that's been bugging
>>me for a while. In the "modern" canonical formulation of general relativity,
>>you have a Lie-algebra valued 1-form (connection) A and a densitized Lie
>>algebra valued (n-2)-form E.
>>One thing that has bothered me, and was re-enforced while discussing the
>>senselessness of "pseudo" and "twisted" differential p-forms was that both
>>of these are simply Hodge duals of regular p-forms.
>What Hodge dual?
>twisted (n-p)forms are equivalent to p-forms
>*if there is a metric*.
Exactly. Of course, Eric might reply "But this is general relativity!
There *is* a metric!" And then you'd have to say: "Yes, but it's a
dynamical variable, not a background field! In other words, the
metric depends on the E field, and the associated Hodge star operator
depends on the E field in an even more complicated way... so if you
have expressions involving A, E and the Hodge star operator, to
really see their dependence on A and E you'd have to express the
Hodge star operator in terms of the E field, and this is a MESS!
Even worse, when the E field is degenerate, e.g. when it vanishes,
the metric is degenerate, so the Hodge star operator becomes ill-defined!
In short, this is an example where you'd pay a fearsome price for
trying to use the metric to "cheat" and identify sections of one
bundle with sections of another!"
In short: differential forms are great, but let us not enslave
ourselves to them - nor any other formalism. Use a tool when it's
the right one for the job, not simply out of habit.
"Toby Bartels" <to...@math.ucr.edu> wrote:
> Eric Forgy wrote:
> What Hodge dual?
> twisted (n-p)forms are equivalent to p-forms
> *if there is a metric*.
Ok, I have to complain a little bit here. It is not a big deal, but it is
something that has caused me numerous arguments in the past, and I think it
is the fault of you category theorists in this group, whom I've grown to have
an appreciation for :)
One of the (many) lessons I've learned on this newsgroup is that ISOMORPHIC
does not mean EQUAL!! If you mean p-forms and twisted (n-p)-forms are
"equivalent" in the sense of forming an equivalence class, where A ~ B if
there is an invertible map taking A to B and vice versa, that is fine. I
agree that p-forms and twisted (n-p)-forms are equivalent if there is a
metric. I would like to note that I think that is "bad form" though ;) Then
again, I am still enough of a rookie that it might be the case that when
there exists an isomorphism between two NON-EQUAL vector spaces, then you
call them equivalent. If so, then I humbly take back my complaint :)
> >For instance, the Lagrangian for EF-theory (going with the suggestion in
> >gr-qc/9905087) is given by L = tr(F/\E),
> >but why not just write this as L = tr(F/\*F) = (F,F) vol?
>
> What's vol? Now you need a metric *and* an orientation!
> (Actually, you only need a metric --
> if you're willing to make vol a *twisted* n-form.)
vol is an n-form such that when you integrate vol over some n-chain, the
result is the volume of that n-chain. I'm asking (among other things), "Why
not make vol a fundamental variable?"
By the way, I'd still like to know if "Densitized Pseudo Twisted Forms" are
really necessary :)
Eric
Curvature of space-time can do much worse than make eps_0 a function of
position. It makes eps_0 into a tensor! :) Yet another reason why it is
not healthy to say E and D are the same. But it is even worse than that.
Curvature of spacetime can make the vacuum appear to be bianisotropic, i.e.
D can be a function of both E AND B! I do not think that will have
anything to do with the fundamental charge. I think e is "deep down" a
topological quantity and does not depend in any way on the metric, whereas
the constitutive relations are certainly tied up with the space-time
metric.
I believe there is still a LOT to be learned from deep thinking about the
nature of electromagnetism. The book is not closed on EM yet (as opposed
to what some others may think).
Here is an interesting bit of trivia (well known to experts, but maybe not
so well known to everyone). If you take the Schwarzschild metric of a
black hole, you can completely encode the space-time curvature into an
effective epsilon and mu in flat space-time. In other words, EM radiation
in a spherically symmetric background metric is equivalent to a spherically
symmetric isotropic dielectric/magnetic material.
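(For what it's worth, a rough numerical sketch of that trivia, assuming the
isotropic-coordinate form of the Schwarzschild metric with G = c = 1; the
effective-index formula below is the standard textbook expression, quoted
from memory:)

def n_eff(r, rs=1.0):
    """Effective refractive index eps_eff = mu_eff = n at isotropic radius r."""
    x = rs / (4.0 * r)
    return (1.0 + x) ** 3 / (1.0 - x)

for r in (1.0, 2.0, 5.0, 50.0, 5000.0):
    print(r, n_eff(r))    # decreases toward 1 as r -> infinity (flat space far away)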
Eric
I'd rather say that, when n=3, a twisted 2-form can be drawn as
a 2-d plaquette with a "pseudo-orientation" given by an arrow
orthogonal to it (note that I'm assuming the existence of an
"ambient space" of dimension 3 and that I used the term
"pseudo-orientation" in a deliberately sloppy way).
More generally, let us assume we have an oriented n-dimensional
manifold M along with a p-dimensional oriented submanifold N of
M. Then I guess that (and I say I guess because I don't know
Burke's book either) an ordinary p-form in N can be "drawn" as
a p-plaquette oriented with the help of the N orientation while
a "twisted" p-form can be drawn as a p-plaquette oriented with
the help of the orientation of the normal bundle of N.
>("Cheat somewhat" because it's really bivectors that look like
>oriented plaquettes, and even then, it's only the decomposable
>ones that really look like that - i.e. those of the form v ^ w.)
Yes, but every (n-1)-form is decomposable and so, when n=3, one doesn't
have to worry about that.
pollux.
>Toby Bartels wrote:
>>So you have just wasted your time defining eps_0 to be 37,
>>switching to relative epsilons, and getting back where you started.
>>Certainly one *can* do this, but it's silly!
>Indeed! :-)
>You are a hundred years late,
>you should have told these 'practical' types who insisted
>on an eps_0 != 1 so, when the situation was still open.
Believe me, I'd have said it if I could have.
>And: 8.854 187 817 ... * 10^-12 F/m
>is of course far more practical than 37 :-)
At least it now has an exact value of
625000/(22468879468420441 pi) F/m ^_^.
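(Easy to check: with mu_0 = 4 pi * 10^-7 exactly and c = 299792458 m/s
exactly, eps_0 = 1/(mu_0 c^2) = 10^7/(4 c^2) * 1/pi, and the rational part
reduces as claimed:)

from fractions import Fraction

c = 299792458
print(Fraction(10**7, 4 * c**2))   # 625000/22468879468420441  (times 1/pi, in F/m)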
>>And now eps_r = 1 is a physical property of the vacuum.
>Still isn't,
>for eps_{relative} is defined to be
>eps_{material} / eps_{vacuum},
>so eps_r (vacuum) is not a physical property of the vacuum,
>it's a tautology,
I already said that it was a rather vacuous property.
-- Toby
to...@math.ucr.edu
You can of course believe whatever you want,
but that doesn't make it a physical theory,
not even a speculative new one.
But frankly, I don't see the point you're trying to make.
You seem to agree by now that -in standard physics-
eps_0 is nothing but a superfluous additional constant,
like the inch to the mile, that could have been avoided
by more auspicious choices in the distant past.
On the other hand, stating that theories can be constructed
with a non-trivial eps_0 is a truism:
Just replace E^2 in the Lagrangian by eps_0(r,t) E^2,
for example.
As it is there is no evidence whatsoever
of necessity of such a postulate.
There is experimental evidence on the other hand
(homogeneity and isotropy of space)
that puts experimental limits
on what changes would be possible.
Until somebody puts forward new -physical- theories,
with new experimental consequences,
or until new experimental evidence turns up by accident,
that is all there is to say about the matter.
Best,
Jan
>Toby Bartels wrote:
>>Twisted (n-p)forms are equivalent to p-forms
>>*if there is a metric*.
>One of the (many) lessons I've learned on this newsgroup is that ISOMORPHIC
>does not mean EQUAL!!
*Have* you learned this? You are the one that keeps saying that
pseudo(n-p)forms "are just the Hodge duals of" p-forms.
Prima facie, that seems to be claiming equality between them
(I generously interpreted it to mean mere equivalence).
So then I find myself in the position of having to explain that
although the space of the one may be isomorphic to the space of the other,
they are nevertheless not equivalent -- at least not in general.
My saying that they were at least equivalent in the presence of a metric
was a partial *concession* to you.
>Then
>again, I am still enough of a rookie that it might be the case that when
>there exists an isomorphism between two NON-EQUAL vector spaces, then you
>call them equivalent. If so, then I humbly take back my complaint :)
Not quite that, but not quite equality either.
I suppose that I should explain what I do mean.
Consider the case of a vector space V and its dual V*.
(You may take V to be the tangent space to a manifold at some point,
in which case V* is the cotangent space at that point.)
Now V and V* are isomorphic -- they have the same dimension.
By choosing a basis for V, you automatically get a dual basis for V*,
and this immediately defines a specific isomorphism between them.
However this isomorphism is not natural -- it depends on the basis,
which is arbitrary and not uniquely determined by V itself.
Thus, it's not legitimate to consider V and V* equivalent.
On the other hand, suppose that V is equipped with
a nondegenerate bilinear form g
(such as happens to tangent spaces on a manifold with a metric).
Then g immediately defines a map from V to V*,
which is an isomorphism because g is nondegenerate.
You didn't have to choose a basis,
and it's not even that g uniquely determined a basis --
what matters is that g uniquely determines
a *specific* isomorphism between V and V*
that doesn't require any arbitrary choices.
So now it *is* legitimate to consider V and V* equivalent,
since you're starting from the structure (V,g)
and not merely the vector space V alone.
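(A toy numerical version of that distinction -- nothing but component
bookkeeping:)

import numpy as np

# Identify V with V* either by (a) "reuse the components" in some chosen
# basis, or (b) lower with g. Under a change of basis S, (b) gives the same
# covector; (a) does not.
v = np.array([1.0, 2.0, -1.0])                 # a vector (components in basis e)
g = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])                # a nondegenerate bilinear form on V
S = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])                # change of basis: e'_j = e_i S_ij

v_new = np.linalg.solve(S, v)                  # vector components: v' = S^-1 v
g_new = S.T @ g @ S                            # bilinear form:     g' = S^T g S

# (b) lowering with g is basis-independent (covector components go as S^T):
print(np.allclose(S.T @ (g @ v), g_new @ v_new))   # True
# (a) "same components as v" names a different covector in each basis:
print(np.allclose(S.T @ v, v_new))                 # False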
What exactly is "natural", and what is "arbitrary"?
These notions can be made precise with category theory,
using the concept of isomorphism between functors
(aka "natural isomorphism" or "natural equivalence").
It's good form to consider unequal things equivalent
if a natural equivalence exists, and bad form otherwise.
See Week 78 of TWF for more information.
>>What's vol? Now you need a metric *and* an orientation!
>>(Actually, you only need a metric --
>>if you're willing to make vol a *twisted* n-form.)
>vol is an n-form such that when you integrate vol over some n-chain, the
>result is the volume of that n-chain.
And what is that volume? You need structure
on the manifold (such as a metric) to tell me!
And you also need to orient that n-chain to make vol an n-form --
else you only determine it up to sign and it's twisted.
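(For instance, in coordinates the density you integrate is sqrt|det g|,
which you cannot even write down without a g; a two-line sympy check for
the flat metric in spherical coordinates:)

import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
g = sp.diag(1, r**2, r**2 * sp.sin(theta)**2)
print(g.det())   # r**4*sin(theta)**2, whose square root is the familiar r^2 sin(theta)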
>I'm asking (among other things), "Why
>not make vol a fundamental variable?"
Well, vol can be defined as the Hodge dual of the 0form 1.
And the Hodge dual is already determined by E,
in a horrendously complicated way as John mentions.
Perhaps there is a formulation that makes vol fundamental
and then derives the correct relationship between E and vol
as one of the equations of motion.
But even if this can be done (and perhaps it can!),
still that's no argument against the other formulations.
>By the way, I'd still like to know if "Densitized Pseudo Twisted Forms" are
>really necessary :)
It seems that, at least in our context,
"pseudo" and "twisted" mean the same thing.
Although John suggests that "twisted" can be used more generally,
to twist by whatever line bundle happens to strike your fancy,
the book you read apparently just twisted by the "pseudo"bundle.
Conversely, if you do wish to twist over another bundle,
then you don't need to add "pseudo" in by hand,
since you can add that to the line bundle before you twist.
Furthermore, the same thing can be done with the "densitized" --
that's just another line bundle to twist over.
So no, mere "twisted forms" are sufficient --
with a sufficiently general interpretation of "twisted" -_^.
And yes, they *are* necessary if that's what you have!
Twisted forms, twisted over whatever line bundle,
are *isomorphic* to ordinary forms
(more precisely: the spaces of them are isomorphic),
but it's bad form to consider them equivalent
unless the structure of the manifold defines
a *natural* isomorphism between them.
And sometimes it just doesn't!
-- Toby
to...@math.ucr.edu
"John Baez" <ba...@galaxy.ucr.edu> wrote:
> Toby Bartels <to...@math.ucr.edu> wrote:
> >Eric Forgy wrote:
>
> >>Since you mention it, I'll go ahead and ask a question that's been bugging
> >>me for a while. In the "modern" canonical formulation of general relativity,
> >>you have a Lie-algebra valued 1-form (connection) A and a densitized Lie
> >>algebra valued (n-2)-form E.
>
> >>One thing that has bothered me, and was re-enforced while discussing the
> >>senselessness of "pseudo" and "twisted" differential p-forms was that both
> >>of these are simply Hodge duals of regular p-forms.
>
> >What Hodge dual?
> >twisted (n-p)forms are equivalent to p-forms
> >*if there is a metric*.
>
> Exactly. Of course, Eric might reply "But this is general relativity!
> There *is* a metric!" And then you'd have to say: "Yes, but it's a
> dynamical variable, not a background field!
One of the most frustrating things for me is trying to communicate my
questions properly. I really did try, but I guess I didn't do very well :(
I think that one of the lessons of the modern canonical formulation of GR is
that it really DOES matter what variables you use. Some are definitely
better than others. Of course the theory should not depend on the chosen
variables, but a bad choice of variables complicates things to the point
that solutions seem intractable.
Although I am not a researcher in QG, I have read enough papers to
appreciate the fact that the metric should NOT be used as a fundamental
variable. So, I actually wouldn't reply "There *is* a metric!" because I
think the metric is overrated :) I'd say, "There *is* a volume form!"
> In other words, the
> metric depends on the E field, and the associated Hodge star operator
> depends on the E field in an even more complicated way... so if you
> have expressions involving A, E and the Hodge star operator, to
> really see their dependence on A and E you'd have to express the
> Hodge star operator in terms of the E field, and this is a MESS!
My original question was "Why E?"
You have a connection A and its corresponding curvature F. Then you also
have this densitized (n-2) form E with action
S = int_M tr(F/\E)
To me, it seems like E = *F so you've actually got
S = int_M tr(F/\*F) = int_M (F,F) vol
So my question is, why do you choose A and E as your fundamental variables?
And more generally, why do we ever really need to care about densitized,
pseudo, twisted (DPT) forms in the first place?
I believe you when you say the Hodge star is ugly when expressed in terms of
E, but why would anyone want to do that? How about vol? In 4d, I believe
that E is nothing but vol restricted to 2d surfaces. It seems to me that A
and vol are perfectly good candidates for fundamental variables and this E
is just some kind of restriction of vol to 2d surfaces, i.e. integrating E
over a surface gives the oriented area of that surface.
> Even worse, when the E field is degenerate, e.g. when it vanishes,
> the metric is degenerate, so the Hodge star operator becomes ill-defined!
> In short, this is an example where you'd pay a fearsome price for
> trying to use the metric to "cheat" and identify sections of one
> bundle with sections of another!"
Down with the metric! :) The metric is not the last word. Maybe there is an
alternative, prettier way to do things. When things start getting ugly, that is
a sign you're walking down the wrong road.
> In short: differential forms are great, but let us not enslave
> ourselves to them - nor any other formalism. Use a tool when it's
> the right one for the job, not simply out of habit.
I would agree if I thought differential forms were merely a mathematical
tool. However, I don't look at them that way. Integration is fundamental to
physical processes. In fact, I have made arguments that ANY physical
measurement MUST involve integration in some form or another. So far, my
hypothesis stands. I'd be glad if anyone reading this could offer up a
counterexample.
=================================
CHALLENGE:
Come up with a physical measurement that does not involve integration in one
form or another.
=================================
If my conjecture is correct, you won't be able to :)
For example, any visual measurement involves integration. Say, measuring the
length of something using a meter stick. A visual measurement involves
integration because in this case, your eyes become part of the measuring
apparatus. To see the meter stick, the Poynting vector of the light from the
meter stick was integrated over the surface of the rods and cones in your
eyes to produce a "number" or signal that is then processed by your brain.
Same thing with audible measurement.
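To put the eye example in symbols, the number you actually get out is
something roughly like

signal ~ int_T int_A (S . n) dA dt

with S the Poynting vector, A the surface of a rod or cone, n its normal,
and T the response time: a flux integrated over a surface, then over time.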
Voltage measurements involve integration over finite probes to produce a
number that can then be displayed on a digital display. Most physics lab
measurements (all of them that I could think of) involve voltage
measurements at the most fundamental level. Any measurement always comes
down to reading off numbers, and the process of getting those numbers
always involves integration.
*begin big claim*
Hence, I do not think of differential forms as a mere mathematical tool that
comes in handy now and then. I think of differential forms as speaking
directly to the physical world and as such, should hold some kind of special
place in the mathematical arsenal of physicists. If something cannot be
written as an integral, it has no place in physics.
*end big claim*
If there is any truth to this claim, then integrands are fundamental to
physics and any physically measurable quantity should be expressed as an
integrand, i.e. a differential form.
Eric
>Although I am not a researcher in QG, I have read enough papers to
>appreciate the fact that the metric should NOT be used as a fundamental
>variable. So, I actually wouldn't reply "There *is* a metric!" because I
>think the metric is over rated :) I'd say, "There *is* a volume form!"
Well, like I already said, if the E field is zero, this volume form
will be ZERO. This means you can't use it to define a Hodge star
operator. And that means that you must avoid the Hodge star operator
when doing canonical quantum gravity with A and E.
Is this enough to convince you of the error of your ways yet, or do
I have to get tough? :-)
>My original question was "Why E?"
That's a pretty vague question, but maybe it'll help if I punch
more holes in what you wrote:
>You have a connection A and its corresponding curvature F. Then you also
>have this densitized (n-2) form E with action
>
>S = int_M tr(F/\E)
>
>To me, it seems like E = *F
Well, that's just not true! You just wrote down the action for
EF theory. If you work out the equations of motion you don't
get any equation expressing E in terms of F; you get
F = 0
and
d_A E = 0
Moreover, you certainly don't get any equation involving the
Hodge star operator, because you can't even DEFINE a Hodge star
operator in this theory - there ain't no metric, there's just a
thing like a "volume form" which however is sometimes zero, namely
tr(E ^ E).
Similar remarks hold for various Ashtekar-like formulations of
general relativity.
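In case it helps, here is roughly where those equations come from, with
signs and boundary terms swept under the rug. Since delta F = d_A(delta A),
varying the action and integrating by parts gives

delta S = int_M tr( d_A(delta A) /\ E  +  F /\ delta E )
        = int_M tr( delta A /\ d_A E  +  F /\ delta E )     (up to a sign)

so the E variation forces F = 0 and the A variation forces d_A E = 0 -
and no metric or Hodge star ever shows up.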
>S = int_M tr(F/\*F) = int_M (F,F) vol
This is some completely *different* action: it's the action
for Yang-Mills theory. We can write down this action only in
situations where we have a metric - which is needed not only to
define "vol" but more importantly, the inner product ( , ).
>So my question is, why do you choose A and E as your fundamental variables?
'Cause it works nice.
>And more generally, why do we ever really need to care about densitized,
>pseudo, twisted (DPT) forms in the first place?
Because, dagnab it, we can only identify them with ordinary differential
forms in contexts where we have an orientation and a nondegenerate metric!
In Ashtekar gravity and BF theory, we don't have the latter!
Someday you'll learn all the finite-dimensional irreps of GL(n) and
you will see that not just tensors but densitized tensors and
densitized pseudotensors play a natural role in physics. We will
drag you kicking and screaming to this realization if need be. :-)
[on the physical relevance of differential forms]
> =================================
> CHALLENGE:
> Come up with a physical measurement that does not involve integration in one
> form or another.
> =================================
>
> If my conjecture is correct, you won't be able to :)
I think this conjecture is pretty clearly true in quantum field theory;
the fields are all operator-valued densities-- a peculiar kind of form,
I suppose-- that do not really have well-defined finite values at a
point; you have to integrate some combinations of them over a piece of
space-time to get observable operators with real-number spectra.
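Schematically, the smearing is

phi(f) = int phi(x) f(x) d^4x

for a test function f supported on some region of space-time; it is phi(f),
rather than phi at a point, that stands a chance of being a well-defined
operator.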
--
Matt McIrvin
On Mon, 29 Oct 2001, Toby Bartels wrote:
> Eric Forgy wrote in part:
> >Toby Bartels wrote:
>
> >One of the (many) lessons I've learned on this newsgroup is that ISOMORPHIC
> >does not mean EQUAL!!
>
> *Have* you learned this? You are the one that keeps saying that
> pseudo(n-p)forms "are just the Hodge duals of" p-forms.
> Prima facie, that seems to be claiming equality between them
> (I generously interpreted it to mean mere equivalence).
Well, I learned that the two ideas are not the same, but obviously I
haven't gotten it into my blood yet. Sorry :| I am trying though.
> So then I find myself in the position of having to explain that
> although the space of the one may be isomorphic to the space of the other,
> they are nevertheless not equivalent -- at least not in general.
> My saying that they were at least equivalent in the presence of a metric
> was a partial *concession* to you.
I understand. You are absolutely right. I am trying to learn to be
rigorous in the things I say. Believe me, I am a lot better now than I was
two years ago :) It is not anyone's responsibility to do so, but it would
be really great if people wouldn't make concessions, because then I just
propagate these imprecise notions. Then again, being so precise can
bog down the conversation I suppose. Thanks for taking the time to
explain.
> Thus, it's not legitimate to consider V and V* equivalent.
I understand this.
> So now it *is* legitimate to consider V and V* equivalent,
> since you're starting from the structure (V,g)
> and not merely the vector space V alone.
Ok, so you say two things are equivalent when there is a natural
isomorphism between them.
> What exactly is "natural", and what is "arbitrary"?
> These notions can be made precise with category theory,
> using the concept of isomorphism between functors
> (aka "natural isomorphism" or "natural equivalence").
> It's good form to consider unequal things equivalent
> if a natural equivalence exists, and bad form otherwise.
> See Week 78 of TWF for more information.
Gotcha.
> And what is that volume? You need structure
> on the manifold (such as a metric) to tell me!
> And you also need to orient that n-chain to make vol an n-form --
> else you only determine it up to sign and it's twisted.
Ok. I might be wrong, but I was thinking that you could just START by
defining vol on a manifold (it's just an n-form!), just like you can
start by defining g on a manifold and get vol from g (and an orientation).
I was suggesting that instead of starting with g (and an orientation),
you just start with vol. I think this is legitimate. My "intuition" has
been taking a beating lately, so I might even be wrong about this.
Basically, I don't even see the need to express vol in terms of g or E. If
I give an n-form and integrate it, I get a number. This number doesn't
depend on the coordinates used, and I can just call this number the
"volume." In fact, this defines what volume means.
> >I'm asking (among other things), "Why
> >not make vol a fundamental variable?"
>
> Well, vol can be defined as the Hodge dual of the 0form 1.
> And the Hodge dual is already determined by E,
> in a horrendously complicated way as John mentions.
That would suggest not using the Hodge to define vol from E and 1 :)
> Perhaps there is a formulation that makes vol fundamental
> and then derives the correct relationship between E and vol
> as one of the equations of motion.
Yeah, this is more like what I am thinking about. It seems like E can be
deduced from vol in a fairly simple manner (based on a perhaps flawed
intuition). Integrating E over a surface gives the area of the surface.
Sounds a lot like some kind of restriction of vol from n-d to 2-d.
> But even if this can be done (and perhaps it can!),
> still that's no argument against the other formulations.
Right, but it just might simplify things. Who knows? Probably not though,
but these are the kinds of questions I'm always asking. I tend to question
everything that everyone is doing (if you hadn't noticed). Sometimes it
doesn't make me too popular :|
> >By the way, I'd still like to know if "Densitized Pseudo Twisted Forms" are
> >really necessary :)
>
> And yes, they *are* necessary if that's what you have!
> Twisted forms, twisted over whatever line bundle,
> are *isomorphic* to ordinary forms
> (more precisely: the spaces of them are isomorphic),
> but it's bad form to consider them equivalent
> unless the structure of the manifold defines
> a *natural* isomorphism between them.
> And sometimes it just doesn't!
If you formulate a physical theory in terms of twisted/densitized forms,
then you have twisted/densitized forms by construction :) But I'm
wondering if there was any physical reasoning why you HAVE to formulate
physical theories with these objects, or if there was some "equivalent"
formulation that didn't have them. Prof. Baez seems to be wanting to
force me into thinking so. Maybe I'll ask more questions over there in
response to what he said.
Thank you very much for your input. I can be thick skulled sometimes, but
once an idea manages to sink itself in my brain, it usually settles
nicely. I'll just need to keep hitting it until I understand everything.
Eric
On Tue, 30 Oct 2001, John Baez wrote:
> Eric Forgy <fo...@uiuc.edu> wrote:
>
> >Although I am not a researcher in QG, I have read enough papers to
> >appreciate the fact that the metric should NOT be used as a fundamental
> >variable. So, I actually wouldn't reply "There *is* a metric!" because I
> >think the metric is over rated :) I'd say, "There *is* a volume form!"
>
> Well, like I already said, if the E field is zero, this volume form
> will be ZERO. This means you can't use it to define a Hodge star
> operator. And that means that you must avoid the Hodge star operator
> when doing canonical quantum gravity with A and E.
Ok. That is fine. So avoid the Hodge :) I am not 100% sure of that though.
If E is 0, vol is 0, and that means Hodge is 0 (I think). This just means
you can't define *^{-1}, that's all. What I am trying to figure out is
whether A and E are the variables to use versus, say for example A and
vol. E can be zero, fine. vol can be zero, also fine. I think that
implies Hodge is zero. I don't see the problem.
> Is this enough to convince you of the error of your ways yet, or do
> I have to get tough? :-)
hehe :D
Maybe you need to get tough :)
> >You have a connection A and its corresponding curvature F. Then you also
> >have this densitized (n-2) form E with action
> >
> >S = int_M tr(F/\E)
> >
> >To me, it seems like E = *F
>
> Well, that's just not true! You just wrote down the action for
> EF theory. If you work out the equations of motion you don't
> get any equation expressing E in terms of F; you get
>
> F = 0
>
> and
>
> d_A E = 0
Yeah, sorry. I didn't really mean EF theory. I meant EF-theory with the
constraints that make EF-theory a lot like a Yang-Mills theory, which I
assumed would make E look a lot like *F. I was sloppy, sorry about that.
> Moreover, you certainly don't get any equation involving the
> Hodge star operator, because you can't even DEFINE a Hodge star
> operator in this theory - there ain't no metric, there's just a
> thing like a "volume form" which however is sometimes zero, namely
> tr(E ^ E).
kewl! That is neat :) This is precisely what I was "trying" to talk about
:) Again, as you say, vol can be zero, which is fine.
> Similar remarks hold for various Ashtekar-like formulations of
> general relativity.
>
> >S = int_M tr(F/\*F) = int_M (F,F) vol
>
> This is some completely *different* action: it's the action
> for Yang-Mills theory. We can write down this action only in
> situations where we have a metric - which is needed not only to
> define "vol" but more importantly, the inner product ( , ).
Yeah, after I wrote this I did think about ( , ). I decided to go ahead
and send it because I was thinking that perhaps variation of ( , ) would
be zero somehow as a result of metric compatibility or something. In fact,
setting the variation of ( , ) to zero seems like it might somehow DEFINE
metric compatibility. That is a stretch though. I'm probably wrong.
> >So my question is, why do you choose A and E as your fundamental variables?
>
> 'Cause it works nice.
Alright, the fact that vol = tr(E/\E) is really neat :) I wonder if there
is an equally nice way to obtain E, or something like it from vol? I guess
E is not really a 2d volume form (as I claimed earlier) because it is Lie
algebra valued. But it is a lot like one. This is probably just being silly
now, but perhaps it would be a little interesting to consider a Lie
algebra valued volume form :)
vol = E/\E (take the Lie bracket of Lie algebra part and wedge the rest)
Then it might be possible to obtain E from this Lie algebra valued n-form
by some kind of projection or something. For what it's worth, these crazy
ideas of mine do actually help me understand things once I get them
straightened out.
> >And more generally, why do we ever really need to care about densitized,
> >pseudo, twisted (DPT) forms in the first place?
>
> Because, dagnab it, we can only identify them with ordinary differential
> forms in contexts where we have an orientation and a nondegenerate metric!
> In Ashtekar gravity and BF theory, we don't have the latter!
Well, Ashtekar gravity and BF theory use A and E as fundamental variables
so that densitized forms are built in from the beginning. So, when you
start with densitized forms, I see why you need densitized forms. I just
don't understand why you start with them. I'm wondering if there is
another way to do things.
> Someday you'll learn all the finite-dimensional irreps of GL(n) and
> you will see that not just tensors but densitized tensors and
> densitized pseudotensors play a natural role in physics.
Wow. That certainly sounds interesting to me. I cannot imagine how
finite-dimensional irreps of GL(n) relate to this stuff. Would you mind
recommending a reference that specifically relates irreps of GL(n) to
densitized tensors and densitized pseudotensors? If I read that (and buy
it), that might finally convince me :)
> We will drag you kicking and screaming to this realization if need be.
> :-)
hehe :) That would be a beating I would welcome :)
Thanks a lot for your response,
Eric
>>> By the way, I'd still like to know if "Densitized Pseudo Twisted Forms"
>>> are really necessary :)
>>
>> And yes, they *are* necessary if that's what you have!
>> Twisted forms, twisted over whatever line bundle,
>> are *isomorphic* to ordinary forms
>> (more precisely: the spaces of them are isomorphic),
>> but it's bad form to consider them equivalent
>> unless the structure of the manifold defines
>> a *natural* isomorphism between them.
>> And sometimes it just doesn't!
>
> If you formulate a physical theory in terms of twisted/densitized forms,
> then you have twisted/densitized forms by construction :) But I'm
> wondering if there was any physical reasoning why you HAVE to formulate
> physical theories with these objects, or if there was some "equivalent"
> formulation that didn't have them. Prof. Baez seems to be wanting to
> force me into thinking so. Maybe I'll ask more questions over there in
> response to what he said.
As I believe someone has already observed earlier in this thread, forms
work well for the things they work well for, such as Maxwell's equations.
However, there are some things that they do not _AT ALL_ work well for,
such as expressing the energy-momentum tensor of a scalar field.
In fact, the expression for the energy-momentum tensor of a scalar field
in terms of differential forms is so hideously ugly and difficult to work
with that I suspect almost anyone who looks at it will run away screaming! :-/
So differential forms are simply the wrong tool for handling the
gravitational field produced by a scalar field; nevertheless, scalars
are part of the standard model, and one has little choice but to deal
with them, unless and until the Higgs mechanism is somehow done away with.
(One should always use the right tool for the right job; for example,
one should never use a big wrench to pound in a nail, when a little wrench
will work just as well... ;-I)
[..]
>
> Until somebody puts forward new -physical- theories,
> with new experimental consequences,
I suggest a testable consequence of saying charge is
invariant rather than epsilon, in the thread
"Thought experiment with vacuum polarisability".
Gerard
>Well, I learned that the two ideas are not the same, but obviously I
>haven't gotten it into my blood yet. Sorry :| I am trying though.
I forgive you -_^.
>Ok, so you say two things are equivalent when there is a natural
>isomorphism between them.
Hey, you guessed it before I said it!
>Ok. I might be wrong, but I was thinking that you could just START by
>defining vol on a manifold (it's just an n-form!), just like you can
>start by defining g on a manifold and get vol from g (and an orientation).
OK, sure, you could do this,
and then presumably you'd want to make it a dynamical variable.
Then any use of Hodge * would have to be expressed
in terms of vol explicitly -- which is actually pretty simple.
But note that vol (if fixed) doesn't define an equivalence
between p-forms and (n-p)forms but between p-forms and (n-p)*vectors*.
So this is the only Hodge * operator that you get from just vol.
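(For instance, in 3 dimensions with vol = dx/\dy/\dz, contracting the
bivector d/dx /\ d/dy into vol gives, up to a sign convention, the 1form dz.
So a fixed vol trades 2*vectors* for 1forms, and, running that backwards,
1forms for 2vectors - but it never hands you a map from 1forms to 2*forms*.)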
>Yeah, this is more like what I am thinking about. It seems like E can be
>deduced from vol in a fairly simple manner (based on a perhaps flawed
>intuition). Integrating E over a surface gives the area of the surface.
>Sounds a lot like some kind of restriction of vol from n-d to 2-d.
Volume is not sufficient to define area, unfortunately.
Suppose that the world were stretched to double size
along the N/S axis and squashed to half size on the E/W axis.
Then no volumes would change, but many areas would!
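If you like, here's a rough numerical version of that; the axis
assignments and the numpy phrasing are mine, but the moral is the same.

import numpy as np

# double along N/S (call it y), halve along E/W (call it x), leave z alone
A = np.diag([0.5, 2.0, 1.0])

# volumes scale by |det A| - here exactly 1, so no volume changes
print(abs(np.linalg.det(A)))                     # 1.0

def area(u, v):
    # area of the parallelogram spanned by u and v
    return np.linalg.norm(np.cross(u, v))

ex, ey, ez = np.eye(3)
for name, u, v in [("N/S-vertical", ey, ez),
                   ("E/W-vertical", ex, ez),
                   ("horizontal",   ex, ey)]:
    print(name, area(A @ u, A @ v) / area(u, v))
# N/S-vertical 2.0, E/W-vertical 0.5, horizontal 1.0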
>>And yes, they *are* necessary if that's what you have!
>If you formulate a physical theory in terms of twisted/densitized forms,
>then you have twisted/densitized forms by construction :) But I'm
>wondering if there was any physical reasoning why you HAVE to formulate
>physical theories with these objects, or if there was some "equivalent"
>formulation that didn't have them. Prof. Baez seems to be wanting to
>force me into thinking so. Maybe I'll ask more questions over there in
>response to what he said.
Well, Maxwell's equations seem to have them,
if you look at them in a certain light.
Fundamentally, of course, they don't.
>Thank you very much for your input. I can be thick skulled sometimes, but
>once an idea manages to sink itself in my brain, it usually settles
>nicely. I'll just need to keep hitting it until I understand everything.
OK, we'll see how it goes ^_^.
-- Toby
to...@math.ucr.edu
> "J. J. Lodder" wrote:
> > Until somebody puts forward new -physical- theories,
> > with new experimental consequences [....]
> I suggest a testable consequence of saying charge is
> invariant rather than epsilon, in the thread
> "Thought experiment with vacuum polarisability".
Just saying something cannot lead to testable consequences.
You have not said anything until you formulate a physical theory,
one that says what is going to vary with what.
Jan
>On Tue, 30 Oct 2001, John Baez wrote:
>> Well, like I already said, if the E field is zero, this volume form
>> will be ZERO. This means you can't use it to define a Hodge star
>> operator.
>Ok. That is fine. So avoid the Hodge :) I am not 100% sure of that though.
>If E is 0, vol is 0, and that means Hodge is 0 (I think). This just means
>you can't define *^{-1}, that's all.
Okay, I guess you're right.
More precisely: there are two common ways to define the Hodge
star operator. One is to insist that
<a,b> vol = a ^ *b
and the other is to insist that
<a,*b> vol = a ^ b
for all differential forms a,b. These two definitions give the
same thing when ** = 1, since then * is its own inverse. In
situations where ** = -1 these two definitions differ by a sign -
one of those nasty signs that infests the subject of differential
forms!
Anyway, now we're talking about situations where the metric is
degenerate. In these situations vol = 0. So, the first
definition works when we take * = 0, but then we can't define the
inverse of the Hodge star operator. The second definition doesn't
work at all - that's the one I had in mind.
So, in these situations the first definition is a bit better than
the second, and then everything you say above is true. But alas,
even the first definition doesn't allow us to invert the Hodge
star operator. And this causes lots of problems.
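Here's a one-dimensional caricature of what goes wrong. Take the metric
g = lambda dx dx on the real line, so vol = sqrt(lambda) dx, and the first
definition gives

*1  = sqrt(lambda) dx
*dx = 1/sqrt(lambda) .

As lambda -> 0, the first line happily goes to zero, but the second blows
up: the Hodge star survives as the zero map, while nothing resembling its
inverse survives at all.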
>What I am trying to figure out is
>whether A and E are the variables to use versus, say for example A and
>vol.
There are dozens of different formulations of general relativity.
It sounds like you want to invent one where a connection A and a
volume form vol are the basic variables. You are welcome to try.
However, a pretty reasonable requirement is that you be able to
write down the Lagrangian for general relativity using these variables.
I don't see how to do it using just A and vol!
Let me review a few of the most popular formulations:
1) The Einstein-Hilbert formulation uses the metric g as the
basic field. From this you can write down formulas for the volume
form vol and the Ricci scalar R, and the Lagrangian for general
relativity is
R vol.
2) The Palatini formulation uses a tetrad field e and an so(3,1)
connection A as the basic fields. If F is the curvature of A,
in 4 dimensions the Lagrangian for general relativity in this
formulation is
tr(e ^ e ^ F).
Here the "tetrad field" is (locally speaking) an R^4-valued 1-form,
and I'm using "tr" as shorthand for some process that turns the
stuff in parentheses into an ordinary 4-form.
Here I'm talking about 4 dimensions; in different dimensions we'd
change the gauge group and Lagrangian in an obvious way. E.g.
in 3 dimensions A would be an so(2,1) connection and the Lagrangian
would be
tr(e ^ F).
3) The EF formulation: like the Palatini, but we work with
E = e ^ e, an so(3,1)-valued 2-form. We need to introduce
a Lagrange multiplier field such that varying the action with
respect to this field gives an equation of motion guaranteeing
that E is actually of the form e ^ e for some tetrad field e.
So the basic fields are A, E, and this Lagrange multiplier field.
I'm too lazy to write down the actual Lagrangian.
4) The Ashtekar formulation. This is like the Palatini,
but instead of working with the so(3,1) connection we work with
just its "left-handed part", aka "self-dual part".
5) The CDJ formulation. Like the Ashtekar, but we work with E
instead of e.
Anyway, these are a few, and there are lots more. But I don't know
any where the only fields are a connection and a volume form.
If you want one like that, you'll have to invent it yourself!
Now, I'm too tired to answer your other questions, so at this
point I'll turn you over to the Wizard.
Hey, Wiz - come here and deal with this Eric Forgy dude!
[John Baez exits stage right. Growling under his breath, the
Wizard enters.]
[To be continued in another post.]
John Baez wrote:
>> Someday you'll learn all the finite-dimensional irreps of GL(n) and
>> you will see that not just tensors but densitized tensors and
>> densitized pseudotensors play a natural role in physics.
Eric Alan Forgy wrote:
>Wow. That certainly sounds interesting to me. I cannot imagine how
>finite-dimensional irreps of GL(n) relate to this stuff.
The Wizard replies:
NO???
[He throws a thunderbolt at the ceiling in frustrated rage, wondering
how the educational system has sunk to such a sad level.]
This is the WHOLE POINT of tensors and these fancy "densitized"
and "pseudo" tensors! All these gadgets are supposed to transform
in some reasonable way under coordinate transformations. We have
to start by seeing how they transform under *linear* coordinate
transformations, before we get into fancier stuff. Right???
But what's a linear coordinate transformation? It's an invertible
n x n matrix with real entries. These matrices form the group GL(n)!
So: to completely classify various flavors of tensors and their funky
"densitized" and "pseudo" versions, we need to list all the ways
something can transform under GL(n).
More precisely, we need to understand all the REPRESENTATIONS of GL(n)!
Or more precisely still, the FINITE-DIMENSIONAL representations -
assuming you aren't yet interested in tensor-like gadgets with
*infinitely* many components. Of course, it's enough to understand
the IRREDUCIBLE finite-dimensional representations, since all the
fancier representations are direct sums of these.
So, you need to consult your local math expert and have them explain
the finite-dimensional irreducible representations of GL(n). Each
one of these gives a flavor of "tensor-like gadget". You will find
the classification involves Young diagrams and a little bit of extra
fluff.
A Young diagram is just a bunch of boxes arranged in a pattern
like this:
********
*****
**
*
*
*
The number of boxes in your Young diagram specifies the RANK
of your tensor - that is, the number of indices, if you like indices.
The pattern in which they're arranged tells you the "SYMMETRY" of
the tensor - e.g., completely symmetric with respect to permuting
indices, or completely antisymmetric, or some fancy mixture. Finally,
the little bit of extra fluff tells you the DENSITY WEIGHT of your
tensor, plus whether or not it is PSEUDO. The density weight tells
you how your gadget transforms under dilations, while the pseudoness
tells you whether or not it picks up an extra minus sign under reflections,
besides what it would usually get.
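Spelled out in the very simplest case, and with one common convention for
the sign of the weight: under a linear change of coordinates x -> x' = M x
with M in GL(n),

ordinary scalar:              s' = s
scalar density of weight w:   s' = |det M|^(-w) s
pseudoscalar:                 s' = sign(det M) s
pseudoscalar density:         s' = sign(det M) |det M|^(-w) s

and a higher-rank gadget just multiplies its ordinary tensor transformation
law by the same factors. (With this convention, sqrt|det g| is a scalar
density of weight 1.)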
Your obsession with differential forms means that you only love
representations of GL(n) where the Young diagram consists of n
boxes arranged in a vertical column:
*
*
*
*
In other words, you're extremely fond of tensors that are rank n
and completely antisymmetric with respect to permuting indices.
Also, you scorn them unless the extra fluff is trivial, meaning
that the density weight is zero and there's no pseudo-ness.
It's all very well and good that you love this particular sort
of representation of GL(n) so much - they're great! But, it's
terribly limiting to say that you'll never ever talk about any
other sort of representation of GL(n). Heck, even the metric
tensor is not of this form - it's a completely *symmetric* tensor
of rank 2, so it goes along with a Young diagram like this:
**
>Would you mind
>recommending a reference that specifically relates irreps of GL(n) to
>densitized tensors and densitized pseudotensors?
[The Wizard scratches his head and thinks a minute, then becomes
impatient.]
Hmm, err... doesn't everyone ALREADY KNOW THIS STUFF? I can't
remember where I learned it! It was so long ago. Maybe I was
born knowing it. Or maybe as a child, I once wanted to understand
what all this stuff about "pseudovectors" and "densities" was about,
so I asked my local wizard, and he gave me a lecture just like this,
and then I studied Young diagrams and it all fell into place.
[He scratches his head some more and stares out the window.]
Hmm... a "reference", eh? You want a "reference". Ah, if only
life were so simple: you just ask for a "reference", and some
wise old wizard tells you where to look... wouldn't that be easy!
[Grumbling, the Wiz opens a cabinet and pulls out a tome.
Blowing the dust off the cover, he shows it to Eric.]
How about this:
Morton Hamermesh,
Group Theory and Its Application to Physical Problems,
Addison-Wesley, 1962.
Let's see... Chapter 10: "Linear groups in n-dimensional space;
irreducible tensors". Sounds promising. Let's see how it starts...
yes, this has the stuff you want. First he works out the finite-
dimensional irreducible representations of GL(n), and then he moves
on to fancier groups like O(n) and Sp(n) - don't worry about those
just yet. The only trouble is, he doesn't quite come out and talk
about concepts like "density weight" and "pseudo-ness" - that's
buried in the stuff where he goes from reps of SL(n) to reps of GL(n).
Surely there must be SOMEONE who has explained this more clearly,
but I don't know where... like I said, this is just one of those
things everyone knows... now GET OUT and let me get back to work!
[He tosses the book to Forgy, pushes him out the door, and slams it.]
First, I'd like to thank you, the Wizard, and Marc Nardmann (for his
response to my question about minimalist wormholes). On rare occasions,
when I find a post particularly helpful, I will print it out so I can keep
a hard copy. I printed all three of these responses. A great day! :)
"John Baez" <ba...@galaxy.ucr.edu> wrote:
> eric alan forgy <fo...@students.uiuc.edu> wrote:
>>On Tue, 30 Oct 2001, John Baez wrote:
>>> Well, like I already said, if the E field is zero, this volume form
>>> will be ZERO. This means you can't use it to define a Hodge star
>>> operator.
>>Ok. That is fine. So avoid the Hodge :) I am not 100% sure of that
>>though. If E is 0, vol is 0, and that means Hodge is 0 (I think). This
>>just means you can't define *^{-1}, that's all.
> Okay, I guess you're right.
Miracles do happen :)
> Anyway, now we're talking about situations where the metric is
> degenerate. In these situations vol = 0. So, the first definition
> works when we take * = 0, but then we can't define the inverse of the
> Hodge star operator. The second definition doesn't work at all - that's
> the one I had in mind.
>
> So, in these situations the first definition is a bit better than the
> second, and then everything you say above is true. But alas, even the
> first definition doesn't allow us to invert the Hodge star operator. And
> this causes lots of problems.
It is clear why you cannot invert the Hodge, but why is this a problem? In
any situation I can think of where you'd need to invert the Hodge, it
seems like you'd also have to invert the metric, which can't be done
either in these cases.
[snip of an awesome quick run down of the various formulations of GR]
> 3) The EF formulation: like the Palatini, but we work with E = e ^ e, an
> so(3,1)-valued 2-form.
I guess it is obvious, but I never thought of it. Recently, you said
vol = tr(E/\E)
Now you say E = e/\e, so I guess that means (duh)
vol = tr(e/\e/\e/\e).
I'm not yet fluent with this e business (not e-business :B), so this is
probably a silly question. It is tempting to look at this and ask if this
is a general result, i.e.
In 1d: vol = tr(e)
In 2d: vol = tr(e/\e)
In 3d: vol = tr(e/\e/\e)
In 4d: vol = tr(e/\e/\e/\e)
..
..
..
If so, this is VERY kewl and I should drop everything right now and try to
understand this e :) I have two data points to support this. I know it is
true in 2d and now I know it is true in 4d, but I have never seen it in 1d
or 3d. You can probably guess why I think this is so kewl :)
Thanks again,
Eric
PS: I am experimenting with a new newsreader. I hope it sends in plain
text.
[Moderator's note: it worked. - jb]
>It is clear why you cannot invert the Hodge, but why is this a problem?
Well, this thread has gone on long enough that I've forgotten
what we started out talking about, but I vaguely seem to recall
you wanted to use the Hodge star operator to set up an isomorphism
between something and something else (e.g. p-forms and some sort
of twisted (n-p)-forms). If it's not invertible, it won't give you
an isomorphism.
But instead of talking in general terms about whether there's "a
problem", maybe you should make some precise assertions and let me
agree or disagree with them.
>I guess it is obvious, but I never thought of it. Recently, you said
>
>vol = tr(E/\E)
>
>Now you say E = e/\e, so I guess that means (duh)
>
>vol = tr(e/\e/\e/\e).
Right. There might or might not be a factor of 4! floating around
here somewhere, depending on how you define various things.
>It is tempting to look at this and ask if this
>is a general result, i.e.
>
>In 1d: vol = tr(e)
>In 2d: vol = tr(e/\e)
>In 3d: vol = tr(e/\e/\e)
>In 4d: vol = tr(e/\e/\e/\e)
>..
>..
>..
>
>If so, this is VERY kewl and I should drop everything right now and try to
>understand this e.
It's true! Again, you might want a factor of n! to make this exactly
right, but that's a minor point. Drop everything and learn about e -
it's called a "vierbein" or "cotetrad field" or "soldering form" or
various other things, depending on who you talk to and what dimension
you're in when you're having the conversation.
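If you want to see the pattern with your own eyes first, here's a rough
brute-force check - reading "tr" as contraction with the epsilon symbol,
fixing the 1/n! normalization by hand, and using Euclidean signature so
the square root is real:

import itertools, math
import numpy as np

def sgn(p):
    # sign of the permutation p (given as a tuple)
    p = list(p); s = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

n = 4
e = np.random.default_rng(0).normal(size=(n, n))   # e^a_mu, a random coframe

# coefficient of dx^1/\.../\dx^n in (1/n!) eps_{a1...an} e^{a1}/\.../\e^{an}
coeff = sum(sgn(a) * sgn(mu) * np.prod([e[a[i], mu[i]] for i in range(n)])
            for a in itertools.permutations(range(n))
            for mu in itertools.permutations(range(n))) / math.factorial(n)

g = e.T @ e       # Euclidean-signature metric g_{mu nu} = e^a_mu e^a_nu
print(coeff, np.linalg.det(e), np.sqrt(np.linalg.det(g)))
# coeff agrees with det(e), and sqrt(det g) with |det(e)|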
hehe :) That is true, the subject has taken a turn or two :)
I was originally claiming that fancy shmancy forms, a.k.a. densitized (D)
or pseudo (P) or twisted (T) or pseudo twisted (PT) forms were not REALLY
necessary to describe physics. When it comes down to basics, all you need
is regular forms. In any physical situation where you do have fancy
shmancy (DPT) forms, then you also have the extra machinery you need
to obtain these fancy shmancy forms from regular forms :) I was looking
for a deep down fundamental physical theory that actually required these
(DPT) forms.
You guys seemed to be shooting back at me with some remarks about EF
theory (or was I shooting?) :) F is a regular form, while E is a fancy
shmancy (D) form. I was saying that E is actually just the Hodge dual of a
regular 2-form. But I guess this fact is a consequence of some constraint
that I don't really quite understand. I began questioning why E is
considered fundamental. In EF theory, the metric is degenerate, so you
cannot invert it. I said that vol could also be degenerate which would
mean that Hodge was degenerate, but I didn't see why this should be a
problem in EF theory. Although it is obvious why it is a problem when
looking for isomorphisms between p and (n-p)-forms :)
You and the Wizard nearly blasted me for not understanding why irreps of
GL(n) were so important :) Please, don't let me be a representative of how
low the education system has gone. I am not your typical student (thank
god!) :) I definitely learn things in an unorthodox manner.
>>It is tempting to look at this and ask if this is a general result, i.e.
>> In 1d: vol = tr(e)
>>In 2d: vol = tr(e/\e)
>>In 3d: vol = tr(e/\e/\e)
>>In 4d: vol = tr(e/\e/\e/\e)
>>..
>>If so, this is VERY kewl and I should drop everything right now and try to
>>understand this e.
>
> It's true! Again, you might want a factor of n! to make this exactly
> right, but that's a minor point. Drop everything and learn about e -
> it's called a "vierbein" or "cotetrad field" or "soldering form" or
> various other things, depending on who you talk to and what dimension
> you're in when you're having the conversation.
Very kewl :) Yes, I know what it is called, but my attention starts waning
right about that point in my reading :) I guess I'll have to force myself
to get through it :) If someone had written that sequence I wrote above
about vol in various dimensions, I promise I would have read about it more
carefully :)
I don't expect many people to understand me, but I am spending a PAINFULLY
long time to understand the very basic geometrical concepts as clearly as
possible. It is a personal philosophy of mine that mathematics should not
be learned as a tool to study physics. I look at mathematics as speaking
directly to nature. I suspect quite a few people would say, "Yeah, yeah,
tangent vectors... yeah, yeah, manifolds. Been there, done that. I'm
doing QUANTUM GRAVITY (or M-THEORY... take your pick) for goodness sakes!
I don't need to be bothered by such trivialities!" How many people really
stop to ask, "What are these objects saying about nature?" If they are not
saying something about nature, what purpose could they possibly have in
physics? I personally think that a lot of differential geometry ISN'T
speaking to nature. Therefore, I want to rebuild a machinery that does
speak to nature. It's probably an unpopular point of view, but I want to
know, "If E and F are really fundamental, what ARE THEY?!" :)
Eric
>I personally think that a lot of differential geometry ISN'T
> speaking to nature. Therefore, I want to rebuild a machinery that does
> speak to nature.
Your intuition here is good, but you might wish to know that quite a
few theorists have already done work in this direction: it's called
noncommutative differential geometry. People such as Connes, Rieffel,
Madore etc. have an operator theory based approach, and people such as
Majid, Durdevich etc. have a quantum groups (Hopf algebras) based
approach (which is sometimes called quantum Riemann geometry -
although this could differ in ways from quantum Riemann geometry which
arises in LQG approaches). You might want to look at Connes' textbook
"Noncommutative geometry" or Majid's textbook "Foundations of quantum
group theory". I don't recommend buying these books until you look at
them first - try (inter)library loan or a visit to an academic
bookstore. There are probably also introductory papers on LANL.
I can't say anything about Majid's book, but Connes' book is
largely useless as a textbook. Take a look at it to get some of the
flavor and motivation for the subject, but if you actually want to learn
some of the technicalities, I would suggest looking at
physics/9709045, Varilly, An Introduction to Noncommutative Geometry.
Varilly also has a textbook which looked pretty nice when I saw it in the
bookstore: Very similar to these notes, but more thorough and more
leisurely. You might also want to check out Douglas and Nekrasov's paper
"Noncommutative Field Theory" (hep-th/0106048) for motivation and
applications.
--A.J.
>I was originally claiming that fancy shmancy forms, a.k.a. densitized (D)
>or pseudo (P) or twisted (T) or pseudo twisted (PT) forms were not REALLY
>necessary to describe physics.
Okay. Let me try the simplest example where we need them:
electromagnetism on an unoriented spacetime. If you want to make my
life difficult, you can argue that unoriented spacetimes are
"unphysical" - e.g., we haven't actually sent any right-handed
astronauts on long space journeys and had them come back left-handed,
all their dextrose turned to levulose. But if you do this, I will be
disappointed, because I think you know that nothing about Maxwell's
equations really involves a "handedness" - all that "right-hand rule"
crap is just the result of trying to use vectors in situations where
vectors aren't really appropriate. As a result, it should be perfectly
possible to formulate Maxwell's equations on a spacetime that's not
equipped with an orientation. And it *is* - but not using differential
forms. We need fancy-schmancy forms.
You can probably guess the culprit: it's the Hodge star operator!
We usually think of the Hodge star operator as a map from p-forms
to (n-p)-forms, where n is the dimension of spacetime, but this only
works when spacetime is equipped with a metric and orientation.
If we have a metric but not an orientation, we can still define the Hodge
star operator, but only as a map from p-forms to pseudo-(n-p)-forms.
If we do this, ALL OF ELECTROMAGNETISM WORKS JUST AS WELL AS IT EVER
DID.
This strongly suggests that this is secretly what we "should
have" been doing all along. We never really needed an orientation
on spacetime; we were only using it as a crutch, because we were
too lazy to understand pseudo-forms.
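(A one-line illustration of where the "pseudo" comes from: in R^3 with the
usual metric, *(dz) = dx/\dy if you compute using the orientation (x,y,z),
and *(dz) = -dx/\dy if you use the opposite one. Without a chosen
orientation the honest answer is "dx/\dy, up to an orientation-dependent
sign" - which is precisely a pseudo-2-form.)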
Of course I put "should have" in quotes, because as long as we work
on an oriented spacetime, it DOESN'T REALLY MATTER whether we use
forms or pseudo-forms: there's a canonical isomorphism between the
two, defined using the orientation. So, don't get me wrong: I'm not
trying to persuade people to stop using forms and take up pseudo-forms,
because in most applications we *are* working on an oriented spacetime,
and then pseudo-forms are just an extra bother. I'm just saying that
the usual treatment of electromagnetism makes use of a structure on
spacetime which is not logically necessary - the orientation - and
we can eliminate the need for this if we use pseudo-forms. The
only people who should care about this are people who want to study
physics on unoriented spacetimes *and* people who want to understand
the foundations of geometry as thoroughly and carefully as possible.
I count you among the latter.
By the way, there are other theories, like the theory of the weak
force, which really *do* make use of the orientation on spacetime.
The weak force really *does* care about handedness. So it's nice
to see how you can formulate electromagnetism without an orientation,
but not the electroweak theory.
>You and the Wizard nearly blasted me for not understanding why irreps of
>GL(n) were so important :) Please, don't let me be a representative of how
>low the education system has gone. I am not your typical student (thank
>god!) :) I definitely learn things in an unorthodox manner.
I think it was the Wizard, not I, who complained about you as an
example of how low the educational system has sunk. This is just
his usual irascibility, and you shouldn't take it too seriously.
In fact, rather few physicists have taken the trouble to understand
the complete classification of tensors and their kin using the
representation theory of GL(n). But I think *you* would like to!
>I don't expect many people to understand me, but I am spending a PAINFULLY
>long time to understand the very basic geometrical concepts as clearly as
>possible. It is a personal philosophy of mine that mathematics should not
>be learned as a tool to study physics. I look at mathematics as speaking
>directly to nature.
This is my philosophy too. This is why I enjoy talking to you about
this stuff. Most people rush through the geometry and miss out on a
lot of fascinating subtleties, LIKE THE IMPORTANCE OF DENSITIZED AND
"PSEUDO" TENSORS. Ahem. You see, it's really incredibly cool how:
representations of GL(n) correspond to different kinds of tensor-like
gadgets in situations where spacetime is just a smooth n-manifold with
no extra structure;
representations of GL_0(n) correspond to different kinds of tensor-like
gadgets in situations where spacetime is a smooth n-manifold equipped with
an orientation;
representations of SL(n) correspond to different kinds of tensor-like
gadgets in situations where spacetime is a smooth n-manifold equipped
with an orientation and volume form;
representations of O(n-1,1) correspond to different kinds of tensor-like
gadgets in situations where spacetime is a smooth n-manifold equipped
with a Lorentzian metric;
representations of SO(n-1,1) correspond to different kinds of tensor-like
gadgets in situations where spacetime is a smooth n-manifold equipped
with a Lorentzian metric and orientation;
and so on for various other sorts of groups and structures we like to
put on spacetime. The more structure we put on spacetime, the smaller
the relevant group gets, and the more kinds of tensor-like gadgets
become "the same": two different representations of a big group
can become equivalent when restricted to a smaller group!
This is why, if you're always assuming spacetime is equipped with
an orientation, you don't need to distinguish between p-forms and
pseudo-p-forms: they correspond to different representations of GL(n),
but equivalent representations of GL_0(n) (the subgroup of GL(n)
that preserves the orientation).
And this is also why some even more crass people don't distinguish
between vectors and 1-forms: they're assuming spacetime is equipped
with a metric, and while vectors and 1-forms correspond to different
representations of GL(n), they correspond to equivalent representations
of O(n-1,1) (the subgroup of GL(n) that preserves the metric).
So you see, when you argue that "there's no real need for pseudo-forms -
forms are all we need", you sound to me exactly like the people who
argue that "there's no need for 1-forms - vector fields are all we need".
Yes, they are equivalent as representations of a small group, but they're
not as representations of a bigger group... so your attitude is fine
when spacetime is equipped with lots of structure, but not when it's
equipped with less!
>You and the Wizard nearly blasted me for not understanding why irreps of
>GL(n) were so important :)
And rightly so!
By the way, I found out who figured out this stuff: Hermann Weyl and
his student Alexander Weinstein, right around 1924. I happen to be
reading a history of this business right now:
Thomas Hawkins, Emergence of the Theory of Lie Groups, Springer,
Berlin, 2000.
First Weyl described how irreps of SL(n) correspond to various
symmetry types of tensor, classified by Young diagram. Then he
realized that GL(n) was in a way more fundamental, since it
includes dilations and reflections. This is where the "densitized"
and "pseudo" tensors come in - the density weight modifies how a
tensor transforms under dilations, while the "pseudoness" throws
in an extra minus sign for reflections. Weyl was the one who
introduced the notion of "tensor density", and in 1921 he wrote:
"By contrasting tensor and tensor-densities, it seems to me that we
have rigorously grasped the difference between *quantity* and *intensity*,
so far as the difference has a physical meaning".
This was in his book "Space, Time, Matter" - which I highly recommend,
by the way. It was in this book that the modern definition of "vector
space" was first introduced!
In his Ph.D. thesis published in 1922, Weinstein classified all the
irreducible representations of GL(n).
In 1925, Weyl wrote:
"In place of the concept of a tensor that of a *tensor density* has
arisen; however, in the general sense that with the transition to a
new coordinate system... multiplication by an *arbitrary* power of
the transformation determinant occurs, the exponent being not
necessarily 1 or even integral."
Thank you very much for this post. I almost sensed a little taste of
encouragement. As an electrical engineer studying differential geometry, I
am getting nothing but resistance from almost all of my peers. You have no
idea how far the slightest encouragement goes :) (Just for the record, I
don't really care what "nay-sayers" think. I study it because I think it
is important. Nonetheless, it would be a bonus to actually be appreciated
now and then though. Oh well. Maybe in another life :))
On Wed, 21 Nov 2001, John Baez wrote:
> Eric A. Forgy <fo...@uiuc.edu> wrote:
>
> >I was originally claiming that fancy shmancy forms, a.k.a. densitized (D)
> >or pseudo (P) or twisted (T) or pseudo twisted (PT) forms were not REALLY
> >necessary to describe physics.
>
> Okay. Let me try the simplest example where we need them:
> electromagnetism on an unoriented spacetime.
Ah, good :) Something that I actually know something about :)
> If you want to make my life difficult, you can argue that unoriented
> spacetimes are "unphysical"... [snip] But if you do this, I will be
> disappointed...
Ok. I will resist the temptation :)
[snip]
> ...it should be perfectly
> possible to formulate Maxwell's equations on a spacetime that's not
> equipped with an orientation. And it *is* - but not using differential
> forms. We need fancy-schmancy forms.
Ok. I'm listening.
> You can probably guess the culprit: it's the Hodge star operator!
> We usually think of the Hodge star operator as a map from p-forms
> to (n-p)-forms, where n is the dimension of spacetime, but this only
> works when spacetime is equipped with a metric and orientation.
> If we have a metric but not an orientation, we can still define the Hodge
> star operator, but only as a map from p-forms to pseudo-(n-p)-forms.
> If we do this, ALL OF ELECTROMAGNETISM WORKS JUST AS WELL AS IT EVER
> DID.
I'm not trying to be trouble, but let me turn things around a little bit.
With an inner product:
[a,b] := int_M a/\*b,
then Maxwell's equations may be written as
dF = 0
d*F = J.
In this form, it seems to me like you do in fact need fancy-shmancy forms.
However, I think that perhaps a more natural expression of Maxwell's
equations is in the form
dF = 0
delF = j,
where del is the adjoint of d with respect to the inner product [ , ],
i.e.,
[dA,B] = [A,delB].
I haven't checked this because, until now, I haven't been especially
interested in whether or not the forms I'm working with are pseudo or not
(not to mention the fact that my prelim is Dec 11!), but it seems to me
that dF and delF are both just regular forms. So, unless I'm mistaken, it
seems that you do not need pseudo forms even on an unoriented manifold as
long as Maxwell's equations are written in terms of d and del, rather than
d and d*. Since del involves the application of the Hodge star twice, it
seems like any "pseudo"-ness would cancel out and you just get regular
forms. If delF is a pseudo form, then I take everything I just said back
:)
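(For the record, the identity I have in mind is del = +/- *d* on p-forms,
with a sign that depends on p, n and the signature. Reversing the
orientation flips the sign of each of the two stars, and the two flips
cancel - that's the cancellation I'm appealing to above.)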
> This strongly suggests that this is secretly what we "should
> have" been doing all along. We never really needed an orientation
> on spacetime; we were only using it as a crutch, because we were
> too lazy to understand pseudo-forms.
Yeah, or maybe we should have been using del all along :)
> The only people who should care about this are people who want to study
> physics on unoriented spacetimes *and* people who want to understand
> the foundations of geometry as thoroughly and carefully as possible.
> I count you among the latter.
Thanks :) I am definitely trying to REALLY understand these things.
> In fact, rather few physicists have taken the trouble to understand
> the complete classification of tensors and their kin using the
> representation theory of GL(n). But I think *you* would like to!
I would certainly like to understand what this representation theory is
saying about fundamental physics. I still haven't convinced myself that it
is not a smoke screen though.
*begin lighthearted rant*
I think that the gods(devils?) of physics/mathematics occasionally send
their minions down to earth in the form of great scientists whose only
purpose is to throw us off the trail. Most notable of such gremlins would
of course be Bohr :)
*end lighthearted rant*
> This is why, if you're always assuming spacetime is equipped with
> an orientation, you don't need to distinguish between p-forms and
> pseudo-p-forms: they correspond to different representations of GL(n),
> but equivalent representations of GL_0(n) (the subgroup of GL(n)
> that preserves the orientation).
I am almost willing to go along with you on this pseudo stuff if my
alternative presentation of Maxwell's equations above via d and del also
explicitly involves pseudo forms. Otherwise, I am not quite convinced (<-
thick skull).
> So you see, when you argue that "there's no real need for pseudo-forms -
> forms are all we need", you sound to me exactly like the people who
> argue that "there's no need for 1-forms - vector fields are all we need".
Ouch! That is bad. I certainly don't want to come across sounding like
this. This is one of my ongoing battles too. To be fair though, I don't
think it is really quite the same. 1-forms are integrands. No one in
their right mind would argue that a vector field was an integrand (or I
would hope not) :) I mentioned before that I consider integrands to be not
only mathematically nice objects, but also to correspond directly to
things that are measurable. Hence, integrands are natural objects in any
physical theory (at least one that was interested in measuring things).
Thanks again,
Eric
>This was in his book "Space, Time, Matter" - which I highly recommend,
>by the way. It was in this book that the modern definition of "vector
>space" was first introduced!
Actually, Giuseppe Peano apparently introduced the same axioms
in a definition of "linear system" in 1888, but this was ignored.
-- Toby
to...@math.ucr.edu
> However, I think that perhaps a more natural expression of Maxwell's
> equations is in the form
>
> dF = 0
> delF = j,
>
> where del is the adjoint of d with respect to the inner product [ , ],
> i.e.,
>
> [dA,B] = [A,delB].
Just a remark (which you probably know about):
Geometric algebra (aka Clifford algebra) is fond of being even
more concise than this by realizing that these two equations
are really two parts of the *single* equation
(d + del)F = j
of electromagnetism. (Depending on which dimension and which
signature of metric one considers it may be convenient to
equivalently use d-del instead of d+del here.)
At least for flat metrics, d + del (aka Dirac operator on the
exterior bundle) coincides with the "vector derivative" found
in geometric algebra texts. I am no expert on GA, having only
read a few introductory texts, but I am under the impression
that generally only flat metrics are treated here (is that
correct?). If I have understood correctly, even the treatment
of general relativity by means of geometric algebra proceeds
by looking at fields on flat space, thereby modifying(!) GR
to some extent. I have somewhat lost my initial interest in
geometric algebra when I realized that none of the texts I'd
seen considered curved space. I would highly appreciate any
references that do.
While d + del coincides with the "vector derivative" on flat
space, it is of course equally well defined for arbitrary
metrics in the context of exterior calculus. Also, d + del may
generally be written in terms of Clifford elements, if one
wishes, similar to the familiar expression for the Dirac
operator on a spin bundle S for curved space but with an
additional contribution from elements of the "sign-reversed"
(see below) Clifford algebra (which is due to the exterior
bundle being a "twisted" spin bundle:
S x S*).
I think claims found in geometric algebra texts about its
conceptual superiority over other formalisms are mostly quite
justified, but I do not quite see how it is better than
exterior calculus (which is also claimed, the above reduction
of two equations to one, supposedly not possible in exterior
calculus, being given as one piece of evidence), at least when the
latter is equivalently reformulated in terms of Clifford
algebra elements instead of form creation and annihilation
operators, which amounts just to a change of basis. Actually,
if the reason why geometric algebra texts do not consider
curved space (if that's how it is) is that one needs *two*
Clifford algebras to express the "vector derivative" in terms
of Clifford elements in curved space, then exterior calculus
is really a *generalization* of "geometric calculus", since it
naturally comes with both Clifford representations:
(Gamma+)^i := (e/\)^i + (e->)^i
(Gamma-)^i := (e/\)^i - (e->)^i
with non-vanishing anti-commutators:
{ (e/\)^i, (e->)^j } = g^ij
{ (Gamma+)^i, (Gamma+)^j } = 2 g^ij
{ (Gamma-)^i, (Gamma-)^j } = -2 g^ij .
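Here is a quick numerical sanity check of these anticommutators (a
sketch of my own, not part of the argument above: Euclidean metric
g^ij = delta^ij in D = 2, with the interior product realized as the
matrix transpose of exterior multiplication on the 2^D-dimensional
exterior algebra):

import itertools
import numpy as np

D = 2                 # dimension; exterior-algebra basis labelled by bitmasks
dim = 2 ** D

def wedge(i):
    """Matrix of left exterior multiplication by e_i."""
    M = np.zeros((dim, dim))
    for s in range(dim):
        if not (s >> i) & 1:                          # e_i not yet present
            M[s | (1 << i), s] = (-1) ** bin(s & ((1 << i) - 1)).count("1")
    return M

e_wedge = [wedge(i) for i in range(D)]                # (e/\)^i
e_hook  = [W.T for W in e_wedge]                      # (e->)^i, Euclidean adjoint

def anti(A, B):
    return A @ B + B @ A

Id = np.eye(dim)
for i, j in itertools.product(range(D), repeat=2):
    g_ij = 1.0 if i == j else 0.0
    Gp_i, Gp_j = e_wedge[i] + e_hook[i], e_wedge[j] + e_hook[j]
    Gm_i, Gm_j = e_wedge[i] - e_hook[i], e_wedge[j] - e_hook[j]
    assert np.allclose(anti(e_wedge[i], e_hook[j]),      g_ij * Id)
    assert np.allclose(anti(Gp_i, Gp_j),             2 * g_ij * Id)
    assert np.allclose(anti(Gm_i, Gm_j),            -2 * g_ij * Id)
print("anticommutation relations verified (Euclidean, D = 2)")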
Regards,
Urs Schreiber
>Geometric algebra (aka Clifford algebra) is fond of being even
>more concise than this by realizing that these two equations
>are really two parts of the *single* equation (d + del)F = j
>of electromagnetism.
That's because Clifford algebra aficionados are fond of
adding together expressions of different rank,
while differential forms aficionados aren't.
But it can still be done, in the exterior algebra,
and the equation becomes (d + *d*)F = j,
where F is a 2-form and j is a 1-form.
-- Toby
to...@math.ucr.edu
Yes, I think that was my point! :-) But isn't it strange that
(d + del) = (d \pm *d*) is a perfect generalization of the
"vector derivative" to curved geometry, but apparently
ignored by the above mentioned aficionados of Clifford
algebra? I find that rewriting objects of "exterior calculus"
(and I use this to mean looking at forms of, in general, differing
rank) in terms of the Clifford algebra that comes with the "exterior
bundle" can give valuable insight. I had wanted to look at how
this is (in different notation, but equivalently) done in texts
on "geometric algebra", but, to my disappointment, the texts
I have seen do not treat curved geometry at all!
concerning GA (Geometric Algebra):
> I am no expert on GA, having only
> read a few introductory texts, but I am under the impression
> that generally only flat metrics are treated here (is that
> correct?).
No! Incidentally, today the following paper appeared on the
preprint server
M. Pavsic, How the Geometric Calculus Resolves the Ordering
Ambiguity of Quantum Theory in Curved Space, gr-qc/0111092
which is about geometric algebra in curved space, or:
about Dirac theory in curved space. Actually, now
that I have seen it, it's pretty obvious how to do it in
the GA sense. :-) A neat theory!
A little inspection reveals, however, that the "vector
derivative" in curved space *is* equivalent to the exterior
(d + del) = (d +- *d*) when the states of both frameworks are
mapped onto each other via the "symbol map" that sends Clifford
elements to the corresponding forms.
To me it seems that it would be profitable to join forces and
exploit the fact that N=2 susy quantum mechanics/exterior
calculus and geometric calculus are the same thing in two
different frameworks. GC has many advantages of notation and
clarity, though I am still missing a few things here that I
currently only know how to do in the exterior/sqm framework.
But that might well be my fault.
> "Toby Bartels" <to...@math.ucr.edu> schrieb im Newsbeitrag
> news:9tv52l$sq9$1...@glue.ucr.edu...
>> Urs Schreiber wrote in part:
>>>Geometric algebra (aka Clifford algebra) is fond of being even
>>>more concise than this by realizing that these two equations
>>>are really two parts of the *single* equation (d + del)F = j
>>>of electromagnetism.
>> That's because Clifford algebra aficionados are fond of
>> adding together expressions of different rank,
>> while differential forms aficionados aren't.
>> But it can still be done, in the exterior algebra,
>> and the equation becomes (d + *d*)F = j,
>> where F is a 2-form and j is a 1-form.
> Yes, I think that was my point! :-) But isn't it strange that
> (d + del) = (d \pm *d*) is a perfect generalization of the
> "vector derivative" to curved geometry, but apparently
> ignored by the above mentioned aficionados of Clifford
> algebra?
Have you not read yet Chapter 4 of ``Clifford Algebra to Geometric
Calculus,'' by David Hestenes and Garret Sobczyk ??? Hestenes and
Sobczyk's ``cocurl'' (eqn. 4.3.5b) is equivalent to the exterior
differential `d', their ``codivergence'' (eqn. 4.3.5c) is equivalent
to the exterior co-differential `\delta', and their ``coderivative''
(eqn. 4.3.5a) is equivalent to (d + \delta). (Note, however, that
there are no annoying signs depending on dimensionality and signature
in their formalism, as there are in exterior calculus... :-)
> I find that rewriting objects of "exterior calculus" (and I use this to
> mean looking at forms of, in general, differing rank) in terms of the
> Clifford algebra that comes with the "exterior bundle" can give valuable
> insight. I had wanted to look at how this is (in different notation, but
> equivalently) done in texts on "geometric algebra", but, to my
> disappointment, the texts I have seen do not treat curved geometry at
> all!
Derivatives wrt curved manifolds are treated in Chapter 5 of ``Clifford
Algebra to Geometric Calculus,'' by David Hestenes and Garret Sobczyk.
> Have you not read yet Chapter 4 of ``Clifford Algebra to Geometric
> Calculus,'' by David Hestenes and Garret Sobczyk ???
Oops, no I have not read this book. I only had a look at a few texts
I found on the web.
Thanks!
> .................................................(Note, however, that
> there are no annoying signs depending on dimensionality and signature
> in their formalism, as there are in exterior calculus... :-)
These sign changes are critically important and should not be hidden!
Spaces with odd and even dimensions are fundamentally different w.r.t.
orientability. Hiding the signs is the exact same thing as introducing i
into bilinear forms to make them look Euclidean. It obscures the actual
origin of this i, which is the unit pseudoscalar - so it's not even true
to the spirit of Clifford algebras to obscure things like this!
As long as one doesn't actually have to solve any equations, one can
indulge in symbol chopping like this. Once there's a differential
equation to solve, you have to know how signs are being book-kept.
I love Clifford algebras, but Hestenes is off his chair thinking that
his notation is some kind of key to the universe. It's confusing and
derivative (so to speak). Differential forms are far more natural in
this case. Sorry to get huffy, but this is really one of my pet peeves.
As an illustration of the confusion introduced by Hestenes' notation,
see his weird treatment of the Dirac equation.
-ross
> "Gordon D. Pusch" wrote:
> > .....................................(Note, however, that
> > there are no annoying signs depending on dimensionality and signature
> > in their formalism, as there are in exterior calculus... :-)
> These sign changes are critically important and should not be hidden!
> Spaces with odd and even dimensions are fundamentally different w.r.t.
> orientability. Hiding the signs is the exact same thing as introducing i
> into bilinear forms to make them look Euclidean. It obscures the actual
> origin of this i, which is the unit pseudoscalar - so it's not even true
> to the spirit of Clifford algebras to obscure things like this!
Many annoying signs in exterior calculus are due to the fact that the
square of the Hodge star depends not only on the dimension and signature
but also on the rank of forms:
(*)^2 = (-1)^( N (D - N) + s)
(where D is the dimension of the manifold, s the number of negative
eigenvalues of the metric and N is the "number operator" that has as
eigenvalues the rank of forms on forms of definite rank).
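(For instance, on a 4-dimensional lorentzian manifold D = 4 and
s = 1, so (*)^2 is -1 on 2-forms but +1 on 1-forms and 3-forms.)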
This can be significantly improved by using a modification of the
*-operator, instead. The choice of sign of the Hodge-* is the way it
is in order to give the correct inner product
a /\ * b = (a_mn..p b^mn..p) vol .
But a more natural duality operator for most other purposes is the
Clifford pseudo-scalar
I = G^1 G^2 ... G^D
where
G^i = (e/\)^i + (e->)^i
are Clifford elements constructed from the operators of exterior
(e/\) and interior (e->) multiplication with respect to a
(pseudo-)orthonormal frame e, satisfying
{(e/\)^i, (e->)^j} = \eta^ij
A straightforward calculation gives the relation between "I" and "*":
I = * (-1)^(N(N-1)/2) .
The square of "I" is:
(I)^2 = (-1)^(D(D-1)/2 + s)
which does not depend on the form rank any more.
A duality operator squaring to the identity is obtained by modifying
"I" by a phase:
c = (i)^(D(D-1)/2+s) I ,
so that c^2 = (-1)^(D(D-1)/2+s) (I)^2 = +1 .
Using this operator "c" the sign of the coderivative del becomes much
easier to memorize:
del = (-1)^D c d c
which can be checked by applying the above definitions to the usual
expression of del in terms of the Hodge operator:
del = * d * (-1)^(D(N-1)+1+s) .
The duality (or should I say "chirality"? :-) operator "c" proves to
be advantageous in many other places in exterior calculus, most
notably in analyzing exterior calculus on lorentzian manifolds:
Another important feature, besides squaring to the identity, is its behaviour
under taking the adjoint:
c^\dag = (-1)^s c .
So "c" is hermitian on euclidean and anti-hermitan on lorentzian
manifolds.
An immediate consequence of this fact is that for lorentzian metrics
sections of the exterior bundle that live in the same eigenspace of c
have vanishing inner Hodge product.
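To make the rank-(in)dependence concrete, here is a small numerical
sketch (my own check, not part of the derivation above: Euclidean
signature s = 0 in D = 4, interior product realized as the matrix
transpose of exterior multiplication; only the convention-independent
squares and adjoints are tested, since the relative sign between "I"
and "*" depends on ordering conventions):

import numpy as np

D = 4                 # Euclidean signature, s = 0
dim = 2 ** D          # exterior-algebra basis labelled by bitmasks

def rank(s):
    return bin(s).count("1")

def wedge(i):
    """Matrix of left exterior multiplication by e_i."""
    M = np.zeros((dim, dim))
    for s in range(dim):
        if not (s >> i) & 1:
            M[s | (1 << i), s] = (-1) ** bin(s & ((1 << i) - 1)).count("1")
    return M

e_wedge = [wedge(i) for i in range(D)]
e_hook  = [W.T for W in e_wedge]          # interior product (Euclidean adjoint)

# Hodge star on the orthonormal basis: *e_S = sign * e_{S^c},
# with the sign fixed by  e_S /\ *e_S = vol.
star = np.zeros((dim, dim))
for s in range(dim):
    comp = (dim - 1) ^ s
    perm = [i for i in range(D) if (s >> i) & 1] + \
           [i for i in range(D) if (comp >> i) & 1]
    inversions = sum(perm[a] > perm[b] for a in range(D) for b in range(a + 1, D))
    star[comp, s] = (-1) ** inversions

# (*)^2 depends on the rank N:  (*)^2 = (-1)^(N(D-N))  for s = 0
star2 = star @ star
for s in range(dim):
    N = rank(s)
    assert np.isclose(star2[s, s], (-1) ** (N * (D - N)))

# The pseudoscalar I = G^1 ... G^D squares to a rank-independent sign
G = [e_wedge[i] + e_hook[i] for i in range(D)]
I = np.eye(dim)
for Gi in G:
    I = I @ Gi
assert np.allclose(I @ I, (-1) ** (D * (D - 1) // 2) * np.eye(dim))

# c = i^(D(D-1)/2 + s) I squares to the identity and is hermitian for s = 0
c = (1j) ** (D * (D - 1) // 2) * I
assert np.allclose(c @ c, np.eye(dim))
assert np.allclose(c.conj().T, c)
print("sign formulas verified (Euclidean, D = 4)")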
> As long as one doesn't actually have to solve any equations, one can
> indulge in symbol chopping like this. Once there's a differential
> equation to solve, you have to know how signs are being book-kept.
>
> I love Clifford algebras, but Hestenes is off his chair thinking that
> his notation is some kind of key to the universe. It's confusing and
> derivative (so to speak). Differential forms are far more natural in
> this case. Sorry to get huffy, but this is really one of my pet peeves.
I consider the above construction of the chirality operator "c" an
example of how the Clifford structure of the exterior bundle can
improve or at least facilitate one's understanding of the latter.
After all, the exterior bundle is a twisted spin bundle. N. Mankoc
Borstnik has written a series of papers that analyse the explicit
representations of Clifford structures on Grassmann spaces (like the
space of forms).
The claims, made in Hestenes' book, about the deficiencies of
exterior calculus really only apply to exterior calculus restricted
to forms of definite rank, the way it is often presented. But at
least since Witten's "Susy and Morse theory" it is apparent that not
looking at the "Fock space" of forms really means missing a lot which
is already there (part of which is, of course, mentioned in any book
on spin geometry). In fact, as I have already mentioned in this
thread, exterior calculus is really a superset of "geometric
calculus", in the sense that, roughly, it is N=2 susy whereas the
latter is only N=1. This means that in "geometric calculus" there is
only one Dirac operator, whereas in exterior calculus there are two
(mainly due to the fact that exterior calculus comes with *two*
representations of the Clifford algebra). As is shown in the reviews
by J. Froehlich on "supersymmetry and differential geometry" on
commutative spaces, an N=1 sqm system (spin bundle) can always be
extended to an N=2 system (exterior bundle). In contrast, to go from
N=2 to N=4 (complex geometry) is not automatic; it requires an almost
complex structure. So from this point of view geometric calculus is
hydrogen, whereas exterior calculus is helium: it is the next closed
shell. :-) I hasten to emphasize that I am not saying this in a mere
desire to engage in a fight between camps, which would be silly, but
only in a personal desire to put things in perspective. As I have
dramatically demonstrated in this thread, I am still embarrassingly
ignorant of much of what is done in "geometric calculus", but I can
see that it is certainly profitable to emphasize Clifford structures
whenever they appear and that this has perhaps been neglected in
exterior calculus. So I'll try to see what can be learned from
"geometric calculus".
The bottom line is: We live in a Universe that _appears_ to have a
metric tensor and inner products built into it at a very fundamental
level, so straitjacketing oneself by using a mathematical language
that attempts to ``hide the metric tensor under the carpet'' and only
refer to it via awkward circumlocutions involving the Hodge dual,
contraction identities that take arbitrary vector fields as arguments,
``vector-valued forms,'' &c., &c. is simply unnatural. In this respect,
the ``Geometric Calculus'' approach is superior, since it has Grassmann's
inner product built into it from the word go. People who insist on doing
their math in a ``forms only'' straitjacket are welcome to still do so,
but to do _physics_, forms simply aren't enough.
> A point no one seems to have brought up in these threads (except indirectly
> in John Baez's comments about ``Young Tableaux'') is that differential forms
> qua differential forms (i.e., nothing but purely antisymmetric tensors)
> are simply too impoverished a language to express physics in.
Any time a thread pops up that mentions differential forms, I am always
interested. And in all the threads I've read, I haven't seen anyone say
that "forms are all you need" (<- Beatles tribute... moment of silence
for a personal hero). If anyone has come close to saying this, it is
probably me and I certainly didn't intend to imply that I thought forms
are all you need for physics. I did submit that regular forms are all you
need as compared to fancy-schmancy forms, and the jury is still out on that
as far as I'm concerned (since no one responded to my EM counter :)).
However, I didn't mean to imply that regular + fancy-schmancy forms are
all you need either. I admit, I would be interested in finding out that
forms are really not enough, but I haven't heard any really convincing
arguments yet (not saying convincing arguments don't exist, this is just a
statement of my ignorance) why they aren't. I am not religious about it
either way though.
> You _still_
> need to introduce vectors and contractions, and to smuggle the metric
> and the ``inner product'' in through the back door via the Hodge dual ---
Just curious, but when do we ever "measure" a vector quantity? A vector is
a quantity that is defined at a point. There is no measuring apparatus
that can measure pointwise quantities. Any measurement is going to involve
"integrating" over some finite region representing the extremities of some
probe. So, if you wanted to measure something, you would first project it
down to the surface or curve (i.e. pull back via the injection) and
integrate. But then you are really measuring a form. So it was the form
that was really of interest in the first place and not the vector itself.
The only time I can think of (although I admit my experience is limited)
when one might actually be interested in a vector quantity is when you use
it to project out a particular component of a form, e.g. choosing a
reference frame, which is not really a natural operation anyway.
> and even all that is not enough: To express things like the energy-
> momentum tensor, one needs to introduce ``bastard'' constructions like
> ``vector-valued one-forms,'' &c., &c.
Hmm... I don't understand the quotes around "bastard" here :) Do you
really think vector-valued forms are bad things, or is the quote somehow
intending to suggest that vector-valued forms are not really bastards? Or
do you mean bastard in the literal sense of a fatherless object? :)
I happen to think that Yang-Mills theory, BF theory, even GR etc etc are
quite nicely expressed via Lie algebra valued forms. But even these guys
are usually accompanied by some "tr" to convert the Lie algebra valued
form to a regular form when it comes down to doing anything resembling a
measurement, i.e. constructing an action.
> The bottom line is: We live in a Universe that _appears_ to have a
> metric tensor and inner products built into it at a very fundamental
> level, so straitjacketing oneself by using a mathematical language
> that attempts to ``hide the metric tensor under the carpet''
One convincing result from LQG, in my opinion, is that the
metric tensor is not a fundamental quantity, but is rather a composite
quantity determined from the vierbein (or tetrad, whatever that little
"e" is called). The metric tensor is defined only in a kind of continuum
limit, whereas the tetrads seem to carry meaning all the way down to the
most fundamental level even in LQG. Admittedly, it may still be too early
to take cues from LQG, but I happen to like the idea that the metric
tensor is not fundamental. It "feels" right :)
> and only
> refer to it via awkward circumlocutions involving the Hodge dual,
> contraction identities that take arbitrary vector fields as arguments,
> ``vector-valued forms,'' &c., &c. is simply unnatural.
I might be misunderstanding, but this looks a lot like you are saying
vector valued forms are unnatural. Of course it is my opinion, but I
cannot think of anything much more natural than a Lie algebra valued form.
Saying otherwise seems to be saying that Yang-Mills theory, BF theory,
and perhaps even GR, etc etc are unnatural. Maybe they are, I don't know
:)
> In this respect,
> the ``Geometric Calculus'' approach is superior, since it has Grassmann's
> inner product built into it from the word go. People who insist on doing
> their math in a ``forms only'' straitjacket are welcome to still do so,
> but to do _physics_, forms simply aren't enough.
The games the geometric calculus people play can be equally played by
differential forms people. All you need to do is allow the addition of
forms of differing degrees like Urs has been so happy to illustrate :) In
this way, geometric calculus is subsumed by forms in an obvious way, and
not only the other way around as GC people like to suggest (two things
that subsume each other are actually equal, are they not?). I do not mean
to say that I prefer forms over GC, but I do not like the way the GC
people seem to be trying to "hard sell" GC as superior to forms (I
haven't read a single paper that didn't seem to take pride in the fact
that forms are contained as a sub-calculus of GC), when they are really
just different ways to express the same thing and, in my opinion, simply
generalizing forms in an obvious way that allows you to add forms of
different rank seems more natural anyway. In other words, when I read
stuff about GC, I see that you can do the exact same thing with forms in
just as natural a way, so I fail to get excited. I see the two as nearly
equivalent (dual is probably a better word).
I, too, was very excited when I first started learning about GC, after
spending some time learning differential forms. However, it was not long
before I rediscovered the same thing Urs rediscovered and I'm sure LOTS
of people have rediscovered, and that is that simply allowing the addition
of forms of different degree buys you everything that GC can buy you (and
Urs seems to imply it buys you more! (which wouldn't surprise me all that
much)). The idea to add forms of different degree IS pretty neat and I do
owe that to GC.
As a parting thought, how do you perform a measurement in GC? You'd need
to construct an integral somehow, but once you've done that, you have
converted everything to a form anyway.
I suggest that GC is nothing but exterior calculus, but in the dual
space. In other words, GC deals with p-vectors in exactly the same way
exterior calculus deals with forms. It all seems a little silly to ask
which is better when they are simply duals of each other. It wouldn't be
hard to spell out the direct mappings of each operation in the two, but
that can be left as an exercise for the reader :) Besides, Urs has already
done most of the work :)
Eric
>A point no one seems to have brought up in these threads (except indirectly
>in John Baez's comments about ``Young Tableaux'') is that differential forms
>qua differential forms (i.e., nothing but purely antisymmetric tensors)
>are simply too impoverished a language to express physics in. You _still_
>need to introduce vectors and contractions, and to smuggle the metric
>and the ``inner product'' in through the back door via the Hodge dual ---
>and even all that is not enough: To express things like the energy-
>momentum tensor, one needs to introduce ``bastard'' constructions like
>``vector-valued one-forms,'' &c., &c.
What I think is a more fundamental distinction
than that between <forms> and <everything else>
is that between *topology* and *geometry*,
in the sense defined on another thread.
Any spinor or tensor with various values
can be dealt with in differential topology,
but there isn't much that you can *do* with them:
contraction of indices of opposite variance,
differentiation and integration of forms,
Lie differentiation, and bracketing of multivectors
(I've doubtless forgotten something).
Especially in the quest for background free theories,
I find it interesting to see what can be done
using only differential topology.
Then when differential geometry is needed,
I like to know what geometric structures are required.
Is a metric needed? a symplectic form? a volume form?
an orientation? a complex structure? nondegeneracy?
Then I know what background is required for a theory to work
and how the theory might couple to spacetime geometry
in a broader theory that has less background structure.
You might notice that 2 of the geometric structures given as examples,
a symplectic form and a volume form, are differential forms.
So I don't see forms as any better or worse than, say, metrics.
It's the imposition of a *particular* structure on the manifold
that makes me sit up and notice <Aha! background structure!>.
The form (^_^) that the structure takes is not what matters.
The geometric calculus of Hestenes and his crowd
uses the fixed background structure of a metric.
That's fine if that's what a theory has.
General relativity doesn't have this (as a fixed background),
so I a priori question the usefulness of GC in that context.
(Studying a *particular solution* of GR is another matter!)
A posteriori I have no particular opinion, since I haven't tried it
and so there is no posterior.
>The bottom line is: We live in a Universe that _appears_ to have a
>metric tensor and inner products built into it at a very fundamental
>level, so straitjacketing oneself by using a mathematical language
>that attempts to ``hide the metric tensor under the carpet'' and only
>refer to it via awkward circumlocutions involving the Hodge dual,
>contraction identities that take arbitrary vector fields as arguments,
>``vector-valued forms,'' &c., &c. is simply unnatural.
I agree -- but nor would I straitjacket myself to assuming that
the metric has more fundamental a place in the world than any other field.
In some studies, this is appropriate, just as sometimes it's appropriate
to set the mass of the electron to 1. In other cases, not.
If we all learn anything from these discussions,
then I hope that it would be Ecclesiastes 3:1.
-- Toby
to...@math.ucr.edu
>What I think is a more fundamental distinction
>than that between <forms> and <everything else>
>is that between *topology* and *geometry*,
>in the sense defined on another thread.
Right!
>Any spinor or tensor with various values
>can be dealt with in differential topology,
A small criticism of this point will be made below.
>but there isn't much that you can *do* with them:
>contraction of indices of opposite variance,
>differentiation and integration of forms,
>Lie differentiation, and bracketing of multivectors
>(I've doubtless forgotten something).
It's sort of hard to write down a complete list of
things you can do with tensors without knowing exactly
what "things you can do" means.
However, if we make this notion precise in a certain
way, we can develop a *complete classification* of what
we can do with tensors in the context of differential
topology.
To do this, first we need the concept of a "god-given
vector bundle". Crudely speaking, a god-given vector
bundle is a vector bundle that we can cook up on a
manifold using no extra structure on our manifold.
I'll let Toby figure out how to make this into a precise
definition! With the right precise definition, the
following stuff is true:
It turns out that any representation of GL(n) gives
a recipe for constructing a god-given vector bundle
over any n-dimensional manifold. For example, the
trivial representation gives the trivial bundle.
The fundamental representation gives the tangent
bundle. The dual of the fundamental representation
gives the cotangent bundle. And so on.
Perhaps Toby can sketch this "and so on"! The basic
idea is that all the ways of getting new representations
from old ones yield ways of getting new vector bundles
from old ones.
It also turns out that *all* god-given vector bundles
come from representations of GL(n) by this trick.
Rather obviously, then, all god-given vector bundles
are direct sums of vector bundles corresponding to
*irreducible* representations of GL(n). These were
classified by Hermann Weyl, and the Wizard sketched
the classification a while back in this thread.
This stuff is the secret reason the Wizard was so
eager to force Eric Forgy to learn about Young diagrams -
a trick for classifying irreducible representations of
GL(n) and other related groups. Once Eric understands
this classification, he will understand all the god-given
vector bundles! The differential forms he's so fond of
are on the list, but there are many more. As the Wiz
hinted, they all correspond to densitized tensors and
"pseudo" densitized tensors.
With a bit more work, we can also classify all the
god-given maps *between* god-given vector bundles.
Toby can probably hazard a guess as to what's going
on here, since he lives and breathes category theory -
and category theory says maps between things are just
as important as things!
If we take these maps as our definition of "all the things
we can do with tensors" in the context of differential
topology, we are then in the happy position of having a
THEOREM which lists all the things we can do!
Of course, integration is not one of the things on this list.
The list only includes "local" things that we can do.
We could try to prove a more general theorem, though.
If anyone wants to learn more about this, here is a book
about it that one can download for free!
Natural operations in differential geometry
by Ivan Kolar, Jan Slovak and Peter W. Michor
http://www.emis.de/monographs/KSM/
Here's a bit of the introduction:
The aim of this book is threefold:
First it should be a monographical work on natural bundles and
natural operators in differential geometry. This is a field which
every differential geometer has met several times, but which is not
treated in detail in one place. Let us explain a little, what we
mean by naturality.
Exterior derivative commutes with the pullback of differential
forms. In the background of this statement are the following general
concepts. The vector bundle $\La^kT^*M$ is in fact the value of a
functor, which associates a bundle over $M$ to each manifold $M$ and
a vector bundle homomorphism over $f$ to each local diffeomorphism
$f$ between manifolds of the same dimension. This is a simple
example of the concept of a natural bundle. The fact that the
exterior derivative $d$ transforms sections of $\La^kT^*M$ into
sections of $\La^{k+1}T^*M$ for every manifold $M$ can be expressed
by saying that $d$ is an operator from $\La^kT^*M$ into
$\La^{k+1}T^*M$. That the exterior derivative $d$ commutes with
local diffeomorphisms now means, that $d$ is a natural operator from
the functor $\La^kT^*$ into functor $\La^{k+1}T^*$. If $k>0$, one
can show that $d$ is the unique natural operator between these two
natural bundles up to a constant. So even linearity is a consequence
of naturality. This result is archetypical for the field we are
discussing here. A systematic treatment of naturality in
differential geometry requires to describe all natural bundles, and
this is also one of the undertakings of this book.
[Here $\La^kT^*M$ is the bundle whose sections are k-forms, as
will be obvious to anyone who spends all day writing in TeX.]
In fact, this introduction gives some really big hints
concerning the questions I wanted Toby to answer! But
I hope I managed to get him to think a bit before seeing
these hints. :-)
>Especially in the quest for background free theories,
>I find it interesting to see what can be done
>using only differential topology.
Me too!
>Then when differential geometry is needed,
>I like to know what geometric structures are required.
>Is a metric needed? a symplectic form? a volume form?
>an orientation? a complex structure? nondegeneracy?
Right!
Now for my promised "small criticism":
Spinor bundles are *not* on the list of god-given vector
bundles. This is related to the fact that spinors form
a representation of Spin(n), not GL(n). So, we need
some extra structure - in fact some extra "stuff" -
on our manifold before we can define spinors on it.
Toby knows the technical category-theoretic definition of
"structure" and "stuff", which we discussed with James Dolan
a long time back here on s.p.r.. So, he'll know what subtle
point I'm making above. Those who don't should read the thread
entitled "Just Categories Now", starting here:
http://groups.google.com/groups?hl=en&selm=72pusp%243o5%241%40pravda.ucr.edu
>Then I know what background is required for a theory to work
>and how the theory might couple to spacetime geometry
>in a broader theory that has less background structure.
Or background *stuff*!
>You might notice that 2 of the geometric structures given as examples,
>a symplectic form and a volume form, are differential forms.
>So I don't see forms as any better or worse than, say, metrics.
>It's the imposition of a *particular* structure on the manifold
>that makes me sit up and notice <Aha! background structure!>.
Right - but we *can* talk in a rigorous way about the concept of "more"
structure, and try to do with as little as possible.
>The geometric calculus of Hestenes and his crowd
>uses the fixed background structure of a metric.
>That's fine if that's what a theory has.
... but awful otherwise!
It's really the inflexibility of their approach that makes
it so limiting.
>If we all learn anything from these discussions,
>then I hope that it would be Ecclesiastes 3:1.
Hear, hear. Actually what I've mainly learned so far is
which verse "Ecclesiastes 3:1" must be! I know what you must
be referring to, even though I hadn't known the verse number....
Is velocity a vector (= 1-form?) and mass flux a Densitized Pseudo
Twisted 2 Form (dpt2f)?
Mass flux times velocity gives energy density, a scalar density.
If I look at this in the language of Clifford algebra, I think of
it as follows:
e_i vector (eg E, v)
e_ij bivector or pseudovector (eg B)
e_ij/e_ijk dpt2f (eg D, or rho*v )
e_i * e_jk / e_ijk scalar density (eg E*D, or v*rho*v)
wrong?
Gerard
>It turns out that any representation of GL(n) gives
>a recipe for constructing a god-given vector bundle
>over any n-dimensional manifold.
This is true.
>It also turns out that *all* god-given vector bundles
>come from representations of GL(n) by this trick.
This is false: as James Dolan reminded me, there are
also lots of other god-given vector bundles, like jet
bundles.
(The fiber at p of a jet bundle over M transforms in a way
which depends, not just on the first derivative df(p) of
the diffeomorphism f: M -> M, but also on the higher
derivatives. To understand god-given vector bundles like
this, we at least need to understand the representations
of certain groups which have GL(n) as a quotient.)
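For instance, already the 2-jet of a scalar function f transforms
with second derivatives of the coordinate change, by the chain rule:

  \frac{\partial^{2} f}{\partial x'^{i}\partial x'^{j}}
  = \frac{\partial x^{a}}{\partial x'^{i}}
    \frac{\partial x^{b}}{\partial x'^{j}}
    \frac{\partial^{2} f}{\partial x^{a}\partial x^{b}}
  + \frac{\partial^{2} x^{a}}{\partial x'^{i}\partial x'^{j}}
    \frac{\partial f}{\partial x^{a}} ,

so the fibers see more of a diffeomorphism than its Jacobian alone.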
>If anyone wants to learn more about this, here is a book
>about it that one can download for free!
>
>Natural operations in differential geometry
>by Ivan Kolar, Jan Slovak and Peter W. Michor
>http://www.emis.de/monographs/KSM/
Obviously I need to read this book! I know they talk
a lot about jet bundles, but now I realize I need to
better understand the complete classification of
god-given vector bundles.