
Reciprocal vs dual vector spaces


fo...@my-deja.com
Oct 2, 2000

Hello,

I've been trying to come to terms with the idea of contravariant and
covariant tensors. My understanding is that a tangent vector is a
contravariant tensor and a 1-form is a covariant tensor. Given a metric
tensor, there is a unique 1-form corresponding to each tangent vector,
and vice versa. If X = X^i e_i is a tangent vector, then the 1-form X_i
dx^i corresponds to X with X_i = g_{ij} X^j.

I was happy with this result until I came across these reciprocal
vectors and now I'm all confused again :)

In 3D, with a basis e_1,e_2,e_3, a reciprocal basis may be defined as

e^i = (e_j x e_k)/[e_i.(e_j x e_k)]

and cyclic permutations of (ijk).

These reciprocal bases satisfy the relation

e^i.e_j = delta^i_j.

A vector may be expressed using the basis e_i or the reciprocal basis
e^i, such that

X = X^i e_i = X_i e^i.

So what is the relation between X^i and X_i? It is exactly the same
as the relation between contravariant and covariant tensors!?!

X_i = g_{ij} X^j
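
(For anyone who wants to check this numerically: here is a minimal sketch in
Python/numpy, with a random basis of R^3; the names are purely illustrative.)

import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=(3, 3))            # rows: an arbitrary basis e_1, e_2, e_3 of R^3
e1, e2, e3 = E
vol = e1 @ np.cross(e2, e3)            # e_1 . (e_2 x e_3)
R = np.array([np.cross(e2, e3),
              np.cross(e3, e1),
              np.cross(e1, e2)]) / vol # rows: the reciprocal basis e^1, e^2, e^3

print(np.round(R @ E.T, 10))           # e^i . e_j = delta^i_j: the identity matrix

g = E @ E.T                            # g_{ij} = e_i . e_j
X_up = np.array([1.0, 2.0, 3.0])       # contravariant components X^i
X = X_up @ E                           # the vector X = X^i e_i itself
X_dn = E @ X                           # X_i, read off via X_i = X . e_i
print(np.allclose(X_dn, g @ X_up))     # True: X_i = g_{ij} X^j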

So what is really going on here? What is the relationship between a dual
vector space V^* of 1-forms and just the vector space V written in the
reciprocal basis? It seems like the two are almost the same thing for
any practical purposes. What am I missing?

It almost looks like you can write

dx^i =~ (e_j x e_k)/[e_i.(e_j x e_k)]

but that really seems weird.

I think of the dx^i as linear functionals satisfying

dx^i(e_j) = delta^i_j

but these reciprocal basis vectors satisfy

e^i.e_j = delta^i_j.

There's got to be something going on here.

Thanks to anyone who can help me clear this up.

Eric

fo...@my-deja.com
Oct 3, 2000

While I was waiting for this post to appear, I made some progress and
can probably refine my question a little bit.

If you have a metric tensor g, then any tangent vector X determines a
unique 1-form X^* = g(X,.). In coordinates

X = X^i e_i

X^* = X_i dx^i

and

X^*(Y) = g(X,Y) = X^i Y^j g(e_i,e_j) = X^i Y^j g_{ij}

I was curious if these reciprocal basis vectors e^i could be defined for any
finite dimension and of course they can by defining

e^i = g^{ij} e_j

So that e^i are also tangent vectors.

Then defining a basis for 1-forms in the usual way,

dx^i(e_j) = delta^i_j,

then

dx^i(X) = dx^i(X^j e_j) = X^j dx^i(e_j) = X^j delta^i_j = X^i

But also,

g(e^i,e_k) = g^{ij} g(e_j,e_k) = g^{ij} g_{jk} = g^i_k = delta^i_k

so that

g(e^i,X) = X^j g(e^i,e_j) = X^j delta^i_j = X^i

and

dx^i(X) = g(e^i,X)

and any tangent vector can be expressed in terms of either e^i or e_i
via

X = X^i e_i = X_i e^i = X_i (g^{ij} e_j) = X^j e_j

where X^j = g^{ij} X_i.
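
The same check goes through numerically in any dimension. A minimal
Python/numpy sketch (random basis, illustrative names):

import numpy as np

n = 5
rng = np.random.default_rng(1)
E = rng.normal(size=(n, n))       # rows: a basis e_i of R^n
g = E @ E.T                       # g_{ij} = g(e_i, e_j)
g_inv = np.linalg.inv(g)          # g^{ij}

R = g_inv @ E                     # rows: e^i = g^{ij} e_j, still ordinary tangent vectors
print(np.round(R @ E.T, 10))      # g(e^i, e_k) = delta^i_k: the identity matrix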

So it seems that these tangent vectors, when expressed in terms of the
reciprocal basis, transform just like 1-forms.

This question came up after reading "Vector and Tensor Analysis with
Applications" by Borisenko and Tarapov. Throughout the entire book they
never once introduce linear functionals. They distinguish covariant and
contravariant vectors according to which basis they are represented in

X = X^i e_i <- contravariant

X = X_i e^i <- covariant

although both describe the same tangent vector. Borisenko and Tarapov
then go on to introduce higher degree contra and covariant tensors as
well as mixed tensors, but all the while the only distinguishing
property between covariant and contravariant is in the basis used,
whereas regardless of the basis each description described the SAME
tangent vector in the SAME tangent space.

To me, this makes it seem as though it is really not even necessary to
talk about linear functionals at all. Everything in tensor analysis can
be done with the one tangent space but expressed using different bases.
Is that right? Could it be that after trying to teach myself about
differential forms for 2 years that we don't even need to consider them
if we consider these reciprocal bases??? Am I exaggerating (I tend to
hit the panic button too early)?

Thanks for any insights,
Eric

George Jones
Oct 4, 2000

In article <8r8vl1$oum$1...@nnrp1.deja.com>, fo...@my-deja.com wrote:

> Hello,
>
> I've been trying to come to terms with the idea of contravariant and
> covariant tensors. My understanding is that a tangent vector is a
> contravariant tensor and a 1-form is a covariant tensor. Given a metric
> tensor, there is a unique 1-form corrensponding to each tangent vector,
> and vice versa. If X = X^i e_i is a tangent vector, then the 1-form X_i
> dx^i corresponds to X with X_i = g_{ij} X^j.
>
> I was happy with this result until I came across these reciprocal
> vectors and now I'm all confused again :)

Let V be an n-dimensional real vector space. Then V*, the dual
space to V, is defined as the set of linear maps from V to R,
i.e., V* = {w : V -> R | w(u + cv) = w(u) + c w(v) }, which,
with addition and scalar multiplication defined in an obvious
manner, is a vector space.

If {e_1, ... , e_n} is a basis for V, then {w^1, ... , w^n}
defined by:

1) each w^i is a linear map from V to R;
2) w^i(e_j) = delta^i_j

is a basis for V* (exercise).

So, given a basis for V, a dual basis for V* can always be
found, and this construction is independent of any "metric" on
V. This construction defines an isomorphism from V to V* via
v = v^i e_i in V goes to v^i w^i in V*. While this isomorphism
is independent of any "metric", it is very dependent on which
basis for V is used.

Now suppose V has a "metric", i.e., there is a non-degenerate
bilinear form g : VxV -> R (think dot product in 3D, Minkowski
"inner product" in special relativity). This give rise to a
natural (basis independent) isomorphism between V and V* by:
for any v in V* define w in V* by w(v) = g(v,w). Remember, w
is a real-valued function on V, so this works out nicely.
v |-> w defines an isomorphism from V to V* (exercise).

Now, what about reciprocal vectors? I believe you'll find that
they pop up as follows.

Pick a basis for V. Map it to a basis for V* using the first
(basis dependent, "metric" independent) isomorphism. Now map
the resulting basis for V* back onto a basis for V using the
second (basis independent, metric dependent) isomorphism. The
resulting basis for V is the reciprocal basis. So we have
V -> V* -> V, where the second arrow is not, in general, the
inverse of the first!
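
Here is a bare-bones numeric sketch of that composite (Python/numpy, with
the dot product playing the role of the "metric"; the function name and the
random bases are purely illustrative):

import numpy as np

rng = np.random.default_rng(9)

def composite(E):
    # V -> V*: e_i |-> w^i (dual basis), then V* -> V via the metric.
    # In components the composite sends e_i to e^i = g^{ij} e_j,
    # where g_{ij} = e_i . e_j is the Gram matrix of the basis.
    g = E @ E.T
    return np.linalg.inv(g) @ E

E = rng.normal(size=(3, 3))                  # rows: a generic basis of R^3
print(np.round(composite(E) @ E.T, 10))      # reciprocal pairing: e^i . e_j = delta^i_j
print(np.allclose(composite(E), E))          # False: the composite moved the basis

Q, _ = np.linalg.qr(rng.normal(size=(3, 3))) # an orthonormal basis
print(np.allclose(composite(Q), Q))          # True: now the composite is the identity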

If the details in this explanation are too sparse, I'd be
glad to fill them in.

Regards,
George


Greg Egan
Oct 4, 2000

In article <8rdflf$cvc$1...@nnrp1.deja.com>, fo...@my-deja.com wrote:

[snip]

> To me, this makes it seem as though it is really not even necessary to
> talk about linear functionals at all. Everything in tensor analysis can
> be done with the one tangent space but expressed using different bases.
> Is that right? Could it be that after trying to teach myself about
> differential forms for 2 years that we don't even need to consider them
> if we consider these reciprocal bases??? Am I exaggerating (I tend to
> hit the panic button too early)?

I gave a longish reply to your first post, telling you a lot of things I
can now see that you already knew ... so I'll offer a short answer to this
question.

No, you *don't* want to replace contravariant vectors throughout
differential geometry with the particular covariant vectors that the
metric at hand maps them to. This would be horrible beyond belief, for
all kinds of reasons. The most obvious one is that you sometimes want to
do differential geometry without any metric at all! And other times, even
when there's always a metric around, you might want to think about a
variety of different metrics.

But even in situations where you're guaranteed a single metric, I really
think this would be begging for trouble and confusion. People can get
away with doing lots of physics in flat spacetime while glossing over the
distinction, but *discovering* the distinction sheds light even on the
things you can get away with. Once you've seen Maxwell's Equations
converted from the grungy vector-based version to the one based on
2-forms, you'll never want to go back.

If your tensor analysis textbooks are putting you off, find a nicer one.
I had a lot of fun learning this stuff from Baez & Muniain's "Gauge Fields,
Knots and Gravity" and Misner, Thorne & Wheeler's "Gravitation", which
teach the subject incidentally to teaching lots of interesting physics.
When I'd tried learning from a book about nothing but tensors, it was very
hard to stay motivated or to see the point of any of it.

--
Greg Egan

Email address (remove name of animal and add standard punctuation):
gregegan netspace zebra net au


Greg Egan
Oct 4, 2000

In article <8r8vl1$oum$1...@nnrp1.deja.com>, fo...@my-deja.com wrote:

[snip]

> I think of the dx^i as linear functionals satisfying
>
> dx^i(e_j) = delta^i_j
>
> but these reciprocal basis vectors satisfy
>
> e^i.e_j = delta^i_j.
>
> There's got to be something going on here.

Here are a couple of remarks that might help. First, you can define a
basis of 1-forms that satisfy:

> dx^i(e_j) = delta^i_j

without reference to a metric. The equation you wrote is taken as the
*definition*: knowing that this is true gives you the value of the 1-form
on any vector at all, by linearity:

dx^i(V^j e_j) = V^i

However, if you *do* have a metric, you can also define a set of 1-forms
{f^i} that are related to your covariant vectors, {e_j}, by a different
equation:

f^i(e_j) = g(e_i, e_j)
= g_{ij}

which yields a result on any vector of:

f^i(V^j e_j) = g_{ij} V^j

Note that this makes f^i a kind of surrogate for "taking the
metric-dependent dot product with e_i", because:

e_i.(V^j e_j) = (e_i.e_j) V^j
= g_{ij} V^j

Also note that we can express the f^i in terms of the {dx^i} very easily,
because the coordinates of a 1-form in the {dx^i} basis are just the value
it gives on the vectors of the {e_j} basis:

(f^i)_j = f^i(e_j)
= g_{ij}

or, because it's always nicer (and safer!) to write the thing itself
rather than its coordinates:

f^i = g_{ij} dx^j

So these two bases, {dx^i} and {f^i}, will be the same *only if* the
metric is a Kronecker delta in the vector basis {e_j} you're using. In
the more general case, they will be different.
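
To see this concretely, compare the component rows of the two 1-form bases
(a minimal Python/numpy sketch with a random non-orthonormal basis; names
are illustrative):

import numpy as np

rng = np.random.default_rng(6)
E = rng.normal(size=(3, 3))      # rows: a non-orthonormal basis e_j of R^3
g = E @ E.T                      # g_{ij} = e_i . e_j

dx = np.eye(3)                   # row i: components of dx^i, since dx^i(e_j) = delta^i_j
f = g                            # row i: components of f^i,  since f^i(e_j) = g_{ij}

print(np.allclose(f, dx))        # False here; True only when g_{ij} is the Kronecker delta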

The metric-free definition, dx^i(e_j) = delta^i_j, just defines a basis of
1-forms that "plucks out" the i-coordinate of any vector that's been
expressed in your chosen vector basis. And vice-versa: the coordinates
h_j of any 1-form h in the {dx^i} basis are just the values that h gives
on the vectors of the {e_j} basis. This makes it very simple to work with
{e_j} and {dx^i}. IIRC, they're known as "complementary bases". If your
bases are derived from a coordinate system -- where e_j=@_j is the
covariant tangent vector corresponding to the partial derivative with
respect to the coordinate x^j, and dx^i is the 1-form you get as the
differential of the coordinate function x^i -- then {@_j} and {dx^i} are
complementary bases. If you have a vector basis {e_j} that might or might
not be related to coordinates, it's common practice to write the
complementary basis as {e^i}, but I've refrained from doing that here,
because you're using {e^i} to talk about the reciprocal vectors.

If you're into picturing these things geometrically: if your vector basis
{e_j} is a coordinate-basis {@_j}, which you picture as taking partial
derivatives with respect to x_j along curves with all coordinates other
than x_j fixed, then these dx^i can be pictured as planes of *constant*
x_i. (In dimension n, these "planes" will have dimension (n-1), but I'll
stick to calling them planes on the assumption that we're picturing
everything with n=3 for the moment.) So they're naturally complementary
concepts, defined in terms of a coordinate system: {@_j} are geometrical
objects where only x_j *changes*, and {dx^i} are geometrical objects where
only x_i is *fixed*. Note that the curve for @_j and the plane for dx^j
need not be orthogonal; they can be defined without having any metric
around at all, so it might not even be meaningful to wonder if they are or
aren't orthogonal, but if there *is* a metric, then it's certainly
possible that they *won't* be orthogonal. It's just a matter of whether
or not your coordinates form a nice orthogonal grid, once you introduce
the metric.

In contrast, the planes of the 1-forms defined with the help of the metric,
{f^i}, will *always* be orthogonal to the curves of the {@_i}, in the
sense that they will yield a result of zero on any vector orthogonal to
@_i. The "planes" for a 1-form in the geometric picture represent the
null space of that 1-form, and since f^i(V)=@_i.V, this will be zero if V
is orthogonal to @_i.

So what exactly are these "reciprocal vectors", that satisfy:

> e^i.e_j = delta^i_j.

They're what you get by taking the first notion of a basis of 1-forms,
{dx^i}, and then "raising an index" with the metric to turn them into
covariant vectors! The metric can be put in a form that acts on two
1-forms to give you a number, just as the more normal way of writing it is
as a tensor that acts on two covariant vectors. You write both as "g",
but you write the coordinates with either upper or lower indices.

If you work in a basis of {e_j} for vectors and {dx^i} for 1-forms, then
the coordinates of the metric are:

g_{ij} = g(e_i,e_j)

g^{ij} = g(dx^i,dx^j)

and you have the nice property that:

g^{ik} g_{kj} = delta^i_j

i.e. the matrices of coordinates are just the inverses of each other.

Given any vector V = V^j e_j, you can use the g_{ij} to "lower an index"
to produce a 1-form that acts as a surrogate for the dot product with V:

(1-form version of V) (W^k e_k) = g_{jk} V^j W^k = V.W

where the "1-form version of V" has coordinates (in the basis {dx^i})
given by its values on {e_i}:

(1-form version of V)_i = (1-form version of V) (e_i)
= g_{ji} V^j

or writing out the 1-form itself:

(1-form version of V) = g_{ji} V^j dx^i

which is just a generalisation of how we got the {f^i}, which are "1-form
versions of" e_i.

Similarly, given any 1-form h = h_i dx^i, you can use the g^{ij} to "raise
an index" to produce a vector, whose coordinate are:

(vector version of h)^j = g^{ji} h_i

What happens if you take the dot product of the "vector version of h" with
some vector W?

(vector version of h).W = g((vector version of h),W)
= g_{jk} (vector version of h)^j W^k
= g_{jk} g^{ji} h_i W^k
= delta^i_k h_i W^k
= h_k W^k
= h(W)

So raising an index just reverses the business of finding a 1-form that
acts like the dot product with a certain vector: it gives you a vector
where taking the dot product acts like the original 1-form.
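
In components, this roundtrip takes two lines to verify (a minimal
Python/numpy sketch with random data; names are illustrative):

import numpy as np

rng = np.random.default_rng(7)
E = rng.normal(size=(3, 3))                  # rows: a basis e_j
g = E @ E.T                                  # g_{ij}
g_inv = np.linalg.inv(g)                     # g^{ij}

V = rng.normal(size=3)                       # components V^j of a vector
V_low = g @ V                                # lower an index: g_{ji} V^j (g is symmetric)
print(np.allclose(g_inv @ V_low, V))         # True: raising undoes lowering

h = rng.normal(size=3)                       # components h_i of a 1-form
W = rng.normal(size=3)                       # components W^k of a vector
h_sharp = g_inv @ h                          # (vector version of h)^j = g^{ji} h_i
print(np.allclose(h_sharp @ g @ W, h @ W))   # True: its dot product with W equals h(W)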

The "reciprocal vectors" will thus have coordinates:

(e^i)^j = g^{ji}

and can be written in terms of the original vector basis as:

e^i = g^{ji} e_j

The formula you give for them:

> In 3D, with a basis e_1,e_2,e_3, a reciprocal basis may be defined as

> e^i = (e_j x e_k)/[e_i.(e_j x e_k)]

> and cyclic permutations of (ijk).

is just a different way of calculating the inverse matrix which gives
g^{ji}, but it will only work in three dimensions. In any number of
dimensions, though, there's nothing to stop you having a basis of vectors
that satisfy:

e^i.e_j = delta^i_j

since all you do is use:

e^i = g^{ki} e_k

e^i.e_j = g(e^i,e_j)
= g(g^{ki} e_k, e_j)
= g^{ki} g(e_k, e_j)
= g^{ki} g_{kj}
= delta^i_j

Toby Bartels
Oct 4, 2000

Eric wrote:

>While I was waiting for this post to appear, I made some progress and
>can probably refine my question a little bit.

Good, then I don't have to write as much.

>If you have a metric tensor g, then any tangent vector X determines a
>unique 1-form X^* = g(X,.). In coordinates X = X^i e_i, X^* = X_i dx^i.
>I was curious if these reciprocal basis vectors e^i could be defined for any
>finite dimension, and of course they can, by defining e^i = g^{ij} e_j.

Exactly. The formula

>>e^i = (e_j x e_k)/[e_i.(e_j x e_k)]

from your last post just gives this in terms of . and x (I think).

>So that e^i are also tangent vectors.

Yes.

>Any tangent vector can be expressed in terms of either e^i or e_i via
>X = X^i e_i = X_i e^i = X_i (g^{ij} e_j) = X^j e_j, where X^j = g^{ij} X_i.

Yes.

>So it seems that these tangent vectors, when expressed in terms of the
>reciprocal basis, transform just like 1-forms.

Their coordinates transform like the usual coordinates of 1forms,
instead of like the usual coordinates of vector fields,
but they are not themselves 1forms.
X is X, and X* is X*, even though their coordinates are the same.
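
(A concrete check, as a minimal Python/numpy sketch: one fixed vector, a
random basis, a random change of basis, and the two different transformation
rules for its two kinds of coordinates. All names are illustrative.)

import numpy as np

rng = np.random.default_rng(8)
E = rng.normal(size=(3, 3))        # rows: old basis e_j
A = rng.normal(size=(3, 3))        # an invertible change of basis
E2 = A @ E                         # new basis e'_i = A_i^j e_j

X = rng.normal(size=3) @ E         # one fixed tangent vector

up1 = np.linalg.solve(E.T, X)      # X^i in the old basis
up2 = np.linalg.solve(E2.T, X)     # X^i in the new basis
dn1, dn2 = E @ X, E2 @ X           # X_i = e_i . X in each basis

print(np.allclose(up2, np.linalg.inv(A).T @ up1))   # contravariant rule: inverse transpose
print(np.allclose(dn2, A @ dn1))                    # covariant rule: A itself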

But do we really have to be so pedantic? As you say:

>To me, this makes it seem as though it is really not even necessary to
>talk about linear functionals at all. Everything in tensor analysis can
>be done with the one tangent space but expressed using different bases.
>Is that right? Could it be that after trying to teach myself about
>differential forms for 2 years that we don't even need to consider them
>if we consider these reciprocal bases??? Am I exaggerating (I tend to
>hit the panic button too early)?

*If* there is a fixed metric g on a manifold,
then g gives an isomorphism between vector fields and 1forms.
So, if you want to treat them as the same thing, you can get away with it.
If you're doing, say, quantum field theory, this will work out fine.

However, if you're doing general relativity (where the metric may vary)
or Hamiltonian analysis from the point of view of symplectic geometry
(where there is no metric), then this will not work,
and you'll have to distinguish vector fields and 1forms.
Then linear functionals are truly necessary.


-- Toby
to...@math.ucr.edu


Pierre Asselin
Oct 4, 2000

fo...@my-deja.com wrote:

> I've been trying to come to terms with the idea of contravariant and
> covariant tensors. My understanding is that a tangent vector is a
> contravariant tensor and a 1-form is a covariant tensor. Given a metric
> tensor, there is a unique 1-form corrensponding to each tangent vector,
> and vice versa. If X = X^i e_i is a tangent vector, then the 1-form X_i
> dx^i corresponds to X with X_i = g_{ij} X^j.

So far, so good. I'm with you.


> I was happy with this result until I came across these reciprocal
> vectors and now I'm all confused again :)
>

> In 3D, with a basis e_1,e_2,e_3, a reciprocal basis may be defined as
>
> e^i = (e_j x e_k)/[e_i.(e_j x e_k)]
>
> and cyclic permutations of (ijk).

Ah. That's because you can sometimes switch between covariant and
contravariant *without* a metric. Specifically, let A^{ij} be a
contravariant tensor of rank 2 whose entries form an invertible
matrix. Let B = inverse(A). Then the components B_{ij} of B form
a covariant tensor!

That's what the reciprocal basis is. Two points of view:

1) No metric. You start with d linearly independent vectors e_i,
d= dimension of the space. Then any vector x is expressible
uniquely as a linear combination of the e_i (wherein lurks a
matrix inversion). Define functions \omega^j by
\omega^j(x) == x's coordinate j in the expansion. The
\omega^j are 1-forms, and they are related to the e_i by

\omega^j(e_i) = \delta_i^j

2) Throw in a metric. Then vectors and 1-forms are the same thing.
Convert the \omega^j above to vectors E_j as above. These are
the reciprocal vectors, and they obey

e_i . E_j = \delta_{ij}

Your formula, which I rewrite as E_i = (e_j x e_k)/[e_i.(e_j x e_k)],
does the matrix inversion by Cramer's rule.
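
(To see the inversion explicitly, here is a minimal Python/numpy sketch with
a random basis; the cross-product formula reproduces the rows of inv(E^T).)

import numpy as np

rng = np.random.default_rng(10)
E = rng.normal(size=(3, 3))                     # rows: the basis e_1, e_2, e_3
e1, e2, e3 = E
vol = e1 @ np.cross(e2, e3)
R = np.array([np.cross(e2, e3),
              np.cross(e3, e1),
              np.cross(e1, e2)]) / vol          # rows: the reciprocal vectors E_i

print(np.allclose(R, np.linalg.inv(E.T)))       # True: it's just matrix inversion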

--
Pierre Asselin
Westminster, Colorado


Eric Forgy
Oct 4, 2000

Greg Egan wrote:

> I gave a longish reply to your first post, telling you a lot of things I
> can now see that you already knew ... so I'll offer a short answer to this
> question.

Thanks :)

> No, you *don't* want to replace contravariant vectors throughout
> differential geometry with the particular covariant vectors that the
> metric at hand maps them to. This would be horrible beyond belief, for
> all kinds of reasons. The most obvious one is that you sometimes want to do
> differential geometry without any metric at all! And other times, even when
> there's always a metric around, you might want to think about a variety of
> different metrics.

Whoa. That sounds a bit weird. Differential geometry without a metric? Are you
referring to differential topology or some other thing I have never heard of?

I've never heard of a variety of metrics. Is there such thing as a single
manifold with two distinct metrics on it? What would that mean?

> But even in situations where you're guaranteed a single metric, I really
> think this would be begging for trouble and confusion. People can get
> away with doing lots of physics in flat spacetime while glossing over the
> distinction, but *discovering* the distinction sheds light even on the
> things you can get away with. Once you've seen Maxwell's Equations
> converted from the grungy vector-based version to the one based on
> 2-forms, you'll never want to go back.

You just hit the reason I'm bothering with all this. My research is in
computational electromagnetics CEM :) I've joined an elite group of
researchers who are pushing forms as the proper language for doing CEM. The
resistance is surprisingly strong. All the better for me, I suppose.

But even then, it seems like you can still do away with linear functionals and
maintain the beauty of EM theory in the language of forms. You'd just need an
exterior algebra. I'm just brain storming, but you could define

e^i /\ e^j = e^i tens e^j - e^j tens e^i

and

df = d_i f e^i (d_i is partial with respect to x^i)

(looking at this post-post df would just be the gradient in any general
coordinate system. grad(f) = g^{ij} d_i f e_j = d_i f (g^{ij} e_j) = d_i f
e^i)

dA = d(A_j e^j) = (dA_j) /\ e^j = d_i A_j e^i /\ e^j

and d^2 = 0 so

F = dA

dF = 0.
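
(In coordinates, dF = 0 is just the cyclic identity on the partials of
F_{ij} = d_i A_j - d_j A_i. A throwaway symbolic check in Python/sympy,
assuming nothing about the potential A:)

import sympy as sp

x = sp.symbols('x0:4')                                  # coordinates x^0 .. x^3
A = [sp.Function('A%d' % m)(*x) for m in range(4)]      # arbitrary potential components
F = [[sp.diff(A[j], x[i]) - sp.diff(A[i], x[j])         # F_{ij} = d_i A_j - d_j A_i
      for j in range(4)] for i in range(4)]

# dF = 0: the cyclic sum of partials of F vanishes identically.
for i in range(4):
    for j in range(4):
        for k in range(4):
            cyc = (sp.diff(F[i][j], x[k]) + sp.diff(F[j][k], x[i])
                   + sp.diff(F[k][i], x[j]))
            assert sp.simplify(cyc) == 0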

You can always decompose F into spacespace and spacetime parts

F = B + E /\ e^0

Then, I like to do something that may be mathematically ugly, but I define

d = d_i e^i /\ + d_0 e^0 /\ = d_s + d_t

dF
= (d_s + d_t)(B + E /\ e^0)
= d_s B + (d_s E) /\ e^0 + d_t B
= d_s B + (d_s E + d_0 B) /\ e^0
= 0.

(Actually I think the ugliness of d = d_s + d_t is just because of the equally
ugly splitting of F into spacespace and spacetime components.)

So

d_s B = 0

and

d_s E + d_0 B = 0.

Then introducing the Hodge wouldn't be too difficult without using linear
functionals, and then you'd get an adjoint exterior derivative del which maps
p-vectors to (p-1)-vectors, leading to

del F = j.

It seems to me that no beauty is lost by avoiding the use of linear
functionals. The only thing is that you need an exterior algebra on your
tangent space. (Assuming you have a metric, but as far as I'm aware no metric
= no physics. I could be wrong.)

> If your tensor analysis textbooks are putting you off, find a nicer one.
> I had a lot of fun learning this stuff from Baez & Muniain's "Gauge Fields,
> Knots and Gravity" and Misner, Thorne & Wheeler's "Gravitation", which teach
> the subject incidentally to teaching lots of interesting physics. When I'd
> tried learning from a book about nothing but tensors, it was very hard to
> stay motivated or to see the point of any of it.

I would really like to get a hold of "Gauge Fields, Knots, and Gravity" but it
is never available (it's checked out) in the library. I could probably track
it down through other universities in our system (U of I is REALLY nice that
way :)) I think Frankel, "Geometry of Physics" is very nice and I also like
Nakahara, "Geometry, Topology, and Physics". It might be blasphemy, but I
don't care for MTW "Gravitation" at all :)

Eric


Eric Forgy
Oct 4, 2000

Toby Bartels wrote:

> But do we really have to be so pedantic? As you say:
>
> >To me, this makes it seem as though it is really not even necessary to
> >talk about linear functionals at all. Everything in tensor analysis can
> >be done with the one tangent space but expressed using different bases.
> >Is that right? Could it be that after trying to teach myself about
> >differential forms for 2 years that we don't even need to consider them
> >if we consider these reciprocal bases??? Am I exaggerating (I tend to
> >hit the panic button too early)?
>
> *If* there is a fixed metric g on a manifold,
> then g gives an isomorphism between vector fields and 1forms.
> So, if you want to treat them as the same thing, you can get away with it. If
> you're doing, say, quantum field theory, this will work out fine.
>
> However, if you're doing general relativity (where the metric may vary)
> or Hamiltonian analysis from the point of view of symplectic geometry
> (where there is no metric), then this will not work,
> and you'll have to distinguish vector fields and 1forms.
> Then linear functionals are truly necessary.

Thanks, but let me understand something. The reciprocal basis just depends on
whether or not there IS a metric and doesn't require that the metric be
constant. In general relativity, there is a metric, but it might not be known,
but you can still define reciprocal bases in terms of this unknown metric. Then
wouldn't everything fall through? Should I try to convince myself that linear
functionals are truly necessary in general relativity? It seems to me that even
in all its glory, general relativity can still be expressed in terms of just
tangent vectors and tensor products thereof. I know there probably is no point
for doing so, but I think it is possible to do so and I'm trying to clean up my
intuition.

I'm on much weaker ground when it comes to symplectic geometry, but doesn't a
symplectic manifold require a nondegenerate symplectic structure w? This pretty
much takes the place of a metric, doesn't it? Could you define a reciprocal
basis on a symplectic manifold?

dx^i(e_j) = delta^i_j

w(e^i,e_j) = delta^i_j

Hmm... now I am rambling :)

w = w_{ij} e^i /\ e^j = w^{ij} e_i /\ e_j

What if we define

e^i = w^{ij} e_j,

which is possible because w is nondegenerate. Then

w(e^i,e_k) = w^{ij} w(e_j,e_k) = w^{ij} w_{jk} = delta^i_k

VOILA!

Does that work out? :)

Then

dx^i = w(e^i,.)

dx^i(e_j) = w(e^i,e_j) = delta^i_j
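
(A quick numeric sanity check that w^{ij} is well defined, as a Python/numpy
sketch: a random antisymmetric w_{ij} in even dimension, so that it can be
nondegenerate. Names are illustrative.)

import numpy as np

rng = np.random.default_rng(5)
n = 4                                 # even, so an antisymmetric form can be nondegenerate
M = rng.normal(size=(n, n))
w_dn = M - M.T                        # w_{ij}: antisymmetric, generically invertible
w_up = np.linalg.inv(w_dn)            # w^{ij}, defined by w^{ij} w_{jk} = delta^i_k

print(np.round(w_up @ w_dn, 10))      # identity, i.e. w(e^i, e_k) = delta^i_k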

So, unless I am horribly mistaken, even in symplectic geometry where you have a
nondegenerate symplectic structure w, you can avoid linear functionals and use a
reciprocal basis on a symplectic manifold.

I am tempted to conclude that even in general relativity AND in symplectic
geometry, you can avoid the introduction of linear functionals and describe all
the physics in terms of tangent vectors described in terms of either a basis or
a reciprocal basis.

Sorry if I'm being troublesome, but I always learn more by arguing a wrong
point to death. By the time I realize I am wrong, I have learned a lot
in the process :) (on rare occasions I am actually right)

Eric


Greg Egan
Oct 5, 2000

In article <gregegan-041...@dialup-m1-20.perth.netspace.net.au>,
greg...@netspace.zebra.net.au (Greg Egan) wrote:

> [snip]



> The metric-free definition, dx^i(e_j) = delta^i_j, just defines a basis of
> 1-forms that "plucks out" the i-coordinate of any vector that's been
> expressed in your chosen vector basis. And vice-versa: the coordinates
> h_j of any 1-form h in the {dx^i} basis are just the values that h gives
> on the vectors of the {e_j} basis. This makes it very simple to work with
> {e_j} and {dx^i}. IIRC, they're known as "complementary bases".

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

I didn't recall correctly. As George Jones pointed out, they're called
"dual bases". (Complementary bases are something entirely different.)

Toby Bartels
Oct 5, 2000

Greg Egan wrote in very small part:

>However, if you *do* have a metric, you can also define a set of 1-forms
>{f^i} that are related to your covariant vectors, {e_j}, by a different
>equation:

> f^i(e_j) = g(e_i, e_j)
> = g_{ij}

As this equation suggests, you really should write "f_i" instead of "f^i",
even though the f_i are 1forms, not vector fields.
This goes along with the fact that the e^i are vector fields, not 1forms.
In fact, just as the e^i are the basis reciprocal to the e_i,
so the f_i are the basis reciprocal to the dx^i,
as the following equation shows:

> f^i = g_{ij} dx^j

Again, this looks better with "f_i" instead of "f^i".


-- Toby
to...@math.ucr.edu


John Baez
Oct 5, 2000

In article <39DBE9AB...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:

>Greg Egan wrote:

>> No, you *don't* want to replace contravariant vectors throughout
>> differential geometry with the particular covariant vectors that the
>> metric at hand maps them to. This would be horrible beyond belief, for
>> all kinds of reasons. The most obvious one is that you sometimes want to do
>> differential geometry without any metric at all! And other times, even when
>> there's always a metric around, you might want to think about a variety of
>> different metrics.

>Whoa. That sounds a bit weird. Differential geometry without a metric? Are you
>referring to differential topology or some other thing I have never heard of?

There are lots of kinds of "geometry" besides geometry involving a
metric. Symplectic geometry! Poisson geometry! Contact geometry!
Complex geometry! Conformal geometry! All of these play a big role
in physics. Metrics are important, but they're not the be-all and
end-all of geometry. So except in specific contexts, it's bad to
unduly privilege them - much less to unduly privilege a single *one*.
It may be okay for what you're doing (classical electromagnetism on
Minkowski spacetime), but not for other things.

>I've never heard of a variety of metrics. Is there such thing as a single
>manifold with two distinct metrics on it? What would that mean?

Two different solutions of Einstein's equations of general relativity, for
example!

Remember, general relativity is not just the study of a *single* metric;
it's the study of *all* metrics satisfying Einstein's equations. Picking
one and setting it up as the "king" who decides how to identify covariant
vectors with contravariant vectors would be a thoroughly unnatural business -
just like picking one coordinate system and saying everybody has to always
work with this one.


Eric Forgy
Oct 6, 2000

John Baez wrote:

> In article <39DBE9AB...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:
> >I've never heard of a variety of metrics. Is there such thing as a single
> >manifold with two distinct metrics on it? What would that mean?
>
> Two different solutions of Einstein's equations of general relativity, for
> example!

Oh man. I hope I misunderstood you or I really have no conceptual understanding for
this stuff.

> Remember, general relativity is not just the study of a *single* metric;
> it's the study of *all* metrics satisfying Einstein's equations. Picking
> one and setting it up as the "king" who decides how to identify covariant vectors
> with contravariant vectors would be a thoroughly unnatural business - just like
> picking one coordinate system and saying everybody has to always work with this
> one.

Take for example two coalescing black holes. There is just one solution/metric
that describes this, right? It wouldn't make sense to talk about two distinct
solutions/metrics for this binary system would it? The solution should be unique.
General relativity is not about studying the possible solutions (plural) to a
specific system (I hope).

I'm still getting my feet wet just learning differential geometry, so I haven't
even begun to study general relativity yet, but I don't understand this idea of
studying *all* metrics satisfying Einstein's equations. Is that a bit of category
theory sneaking in? (Something like objects -> solutions/metrics and morphisms ->
diffeomorphisms?) There is just one equation and its solution is unique (I hope).

The way I think about it is that I am a person. I live on some planet in some solar
system in some universe. As a member of this universe, I'd like to study the world
around me. Unless I incorporate general relativity into my study, I am missing a
big chunk of the picture. This is what I would consider "studying relativity." It
seems that what you are saying general relativity is about is studying all possible
solutions in all possible universes regardless of how close they are to the one I
live in. Is it just a different viewpoint?

I guess I can see how the study of all solutions to all possible universes could
come into play if you are after a theory of quantum gravity.

What I was trying to say about these reciprocal bases and doing away with linear
functionals is that if you are interested in studying a particular system, or
universe, or whatever, then that system (classically at least) has just one metric
associated with it. Having this metric, regardless of whether it is constant or
not, allows you to express all of (Riemannian/Lorentz) geometry in terms of tangent
vectors and bases and reciprocal bases. To rephrase one of my last questions, given
a particular system (or universe), would it ever make sense to talk about two
distinct metrics in that system? If so, what would it mean?

Eric


Paul Arendt
Oct 6, 2000

In article <39DBEF6B...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:

>I'm on much weaker ground when it comes to symplectic geometry, but doesn't
>a symplectic manifold require a nondegenerate symplectic structure w?

Yes.

> This pretty
>much takes the place of a metric, doesn't it?

Well, metrics are *symmetric* nondegenerate tensors with two indices,
while symplectic forms are *antisymmetric* nondegenerate TWTI.

There are some tricks you can do, however!

>Could you define a reciprocal
>basis on a symplectic manifold?
>
>dx^i(e_j) = delta^i_j

You can define a reciprocal basis like this on *any* ( - tangent and
cotangent space of a point on a - ) manifold. All you need is a
coordinate system {x^i} -- then {d/dx^j} (partials) are the
e_j you'd like to use above. The one-form basis is provided by
the coordinate differentials {dx^i}, just as you've written.

However, the need for a coordinate system with which to do this
should warn you that this choice *depends* upon the coordinate
system! This should cause you great alarm.

>w(e^i,e_j) = delta^i_j

** TILT! ** Poor little w doesn't know how to do this yet. It
can only eat things with lower indices, until you teach it to do
otherwise.

>w = w_{ij} e^i /\ e^j = w^{ij} e_i /\ e_j

Hmmm... let's try to make sense of this. The left thing is
a two-form. The middle thing is the expression of the two-form
in terms of a basis; OK! But the right-hand thing is the expression
of an antisymmetric (2,0) tensor (sum of bivectors) in terms of
a basis. So for the equals sign on the right, we get "ERROR
PARSING COMMAND."

>What if we define
>
>e^i = w^{ij} e_j,
>
>which is possible because w is nondegenerate.

With your definitions given above, w^{ij} is ready to
eat forms, not vectors (because that's what the {e_i}
can do). So you want w_{ij} there instead. (Besides,
w^{ij} is still undefined.)

This is, up to a sign convention which isn't set in stone
anywhere, exactly what goes on in Hamiltonian mechanics. Feed
a vector into a symplectic form, and out pops a one-form! But
the one-form is always orthogonal to the original vector, because
w is antisymmetric.

Soon you'll tire of writing all those indices, though.

>w(e^i,e_k) = w^{ij} w(e_j,e_k) = w^{ij} w_{jk} = delta^i_k
>
>VOILA!
>
>Does that work out? :)

Hmmmm... you're trying to *define* the thing on the far left
with this equation, I guess. But w^{ij} hasn't really been
defined yet, so there's a problem with everything but the
thing on the far right!

In short: w is ready to eat a vector and spit out a one-form
orthogonal to the vector in a 1-1 onto way (in finite dimensions,
at least). But without a metric, you've no way to make that
one-form into a vector again, except by undoing the same
operation.

If you have in addition a "complex structure" on each tangent
space, you can define something like a metric. A complex
structure is a (1,1) nondegenerate tensor J that satisfies

w(J a, J b) = w(a,b) for vectors a and b
(it preserves the symplectic structure)
and
J^2 = -1 (the n x n identity matrix)
(it acts like multiplication by i, but on a real vector space)

Then you can show that w(J a, b) is a symmetric (0,2) tensor.
If you pick your J right, it'll be nondegenerate, too! Ain't
that just like a metric? (In fact, if you've already *got*
a metric, you can define J(a) = g(w(a,.),.) and all will be
dandy.)
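
(A bare-bones numeric illustration, as a Python/numpy sketch: the standard
symplectic form on R^4 and one sign convention for J; in these coordinates
J happens to have the same matrix as w.)

import numpy as np

n = 2
Z, I = np.zeros((n, n)), np.eye(n)
w = np.block([[Z, I], [-I, Z]])       # standard symplectic form, w(a,b) = a^T w b
J = np.block([[Z, I], [-I, Z]])       # a compatible complex structure (one sign choice)

print(np.allclose(J @ J, -np.eye(2 * n)))    # J^2 = -1
print(np.allclose(J.T @ w @ J, w))           # w(Ja, Jb) = w(a, b)

G = J.T @ w                                  # matrix of g(a,b) = w(Ja, b)
print(np.allclose(G, G.T))                   # symmetric ...
print(np.all(np.linalg.eigvalsh(G) > 0))     # ... and positive-definite: a metric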

Another trick if you've got a metric already: define the sum
b = g + i w.
Since g is symmetric, and w is antisymmetric, what can you
say about b?

Both of these tricks are in Nakahara, I think (since you would
rather read that than the "phone book"). (Yes, disliking MTW
is considered blasphemy, except by at least one of our
moderators.)


Toby Bartels
Oct 6, 2000

Eric Forgy wrote:

>Toby Bartels wrote:

>>However, if you're doing general relativity (where the metric may vary)
>>or Hamiltonian analysis from the point of view of symplectic geometry
>>(where there is no metric), then this will not work,
>>and you'll have to distinguish vector fields and 1forms.
>>Then linear functionals are truly necessary.

>Thanks, but let me understand something. The reciprocal basis just depends on
>whether or not there IS a metric and doesn't require that the metric be
>constant. In general relativity, there is a metric, but it might not be known,
>but you can still define reciprocal bases in terms of this unknown metric.

When I talked about the metric varying,
I didn't mean varying from point to point in spacetime;
rather, I meant varying in the sense that
more than a single metric was being considered.
If you're studying a particular spacetime in GR,
then you can do this trick of treating 1forms as vector fields
whether or not that spacetime is Minkowski spacetime.
However, if you're analysing a broad class of spacetimes,
with many different metrics on the same manifold,
then this will not work out so well.
In the most extreme situation, study Lagrangian GR;
here, you have a manifold and a Lagrangian density
which is a function of the fields on the manifold *including the metric*.
Now the metric can be *any* nondegenerate symmetric bilinear form.
Then, to solve the system and discover *which* metrics are possible,
you apply the principle of stationary action which says that
the action (the integral of the Lagrangian density over the manifold)
is stationary to 1st order when you vary the fields *including the metric*.
If you ignored the metric in your notation (which is what happens
when you conflate a 1form with its corresponding vector field),
then you wouldn't be able to calculate this very well!

>I'm on much weaker ground when it comes to symplectic geometry, but doesn't a
>symplectic manifold require a nondegenerate symplectic structure w? This pretty
>much takes the place of a metric, doesn't it? Could you define a reciprocal
>basis on a symplectic manifold?

Yes, I suppose you could get away with that, just as you described it.
One problem is that there are different conventions for w^{ij},
differing by a sign (ultimately because of the antisymmetry of w).
Another problem is what to do when there's both a metric and a symplectic form.
Which do you use to turn 1forms into vector fields?

In fact, since every (paracompact) manifold has some metric on it,
you could just pick one randomly and use this trick in any situation!
I hope you can see the danger here:
You'd be tempted to write things like A^i B_i
without knowing whether A and B are really 1forms or vector fields.
If they're both the same (say, both vector fields),
then A^i B_i really means g_ij A^i B^j,
where g is this random metric that we pulled out of thin air.
Since g has no actual physical meaning,
this quantity A^i B_i has no actual physical meaning,
even though you can't tell that from the notation.

Well, the same problem can happen when there's a physical metric around.
It's often useful to know if a certain quantity is metric independent.
If you can express it as A^i B_i *without* using the metric
to raise and lower indices, then you know it's metric independent.
OTOH, if A^i B_i really means g_ij A^i B^j,
then it's a fine physical quantity (since g is meaningful now),
but it's still a metric dependent physical quantity,
which might be something that's useful to know.
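
(To see the danger numerically: a minimal Python/numpy sketch in which the
same two component arrays give a different "A^i B_i" for each randomly
chosen metric.)

import numpy as np

rng = np.random.default_rng(2)
A, B = rng.normal(size=3), rng.normal(size=3)   # fixed component arrays A^i, B^j

for trial in range(2):                          # two metrics pulled out of thin air
    M = rng.normal(size=(3, 3))
    g = M @ M.T                                 # a random positive-definite g_{ij}
    print(A @ g @ B)                            # "A^i B_i" = g_{ij} A^i B^j differs each time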

>Sorry if I'm being troublesome, but I always learn more by arguing a wrong
>point to death. By the time I realize I am wrong, then I have learned a lot
>in the process :) (on rare occasions I am actually right)

Well, that's how we learn :-).


-- Toby
to...@math.ucr.edu


Greg Egan
Oct 6, 2000

In article <39DBE9AB...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:

[snip]

> It seems to me that no beauty is lost by avoiding the use of linear
> functionals. The only thing is that you need an exterior algebra on your
> tangent space.

I'm sure you could make this all work, if you really wanted to. And I'm
certainly all in favour of people realising that you can have exterior
products and Hodge duals on the tangent space.

I just can't see what clarity or economy you think you're gaining by
throwing away the dual space, at the cost of having to keep track of two
different kinds of objects in the tangent space (and even more kinds of
objects in tensor powers of the tangent space). At the very least, this
increases the risk of confusion about which equations rely on the value
(and even the existence) of a metric, and which don't. If I was working
with your system in polar coordinates (let alone genuinely curved
spacetime), I'd probably spend half my time doing computations that
amounted to needlessly multiplying the metric by its inverse.

But there's no disputing taste, and if you believe that this is just as
easy and elegant as working with differential forms and tensors with
honest contravariant indices, I'm not going to argue.

Eric Forgy
Oct 6, 2000

Paul Arendt wrote:

> In article <39DBEF6B...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:
> >w(e^i,e_j) = delta^i_j
>
> ** TILT! ** Poor little w doesn't know how to do this yet. It
> can only eat things with lower indices, until you teach it to do
> otherwise.

hehe :)

Maybe I didn't make it clear enough, but e^i IS a tangent vector. I know the
superscript makes it look a lot like a 1-form, but that is the whole point of
me asking my questions in the first place. The book I have, "Vector and Tensor
Analysis with Applications" by Borisenko and Tarapov [BT], spends an entire
book talking about covariant, contravariant, and mixed tensors but does not
once introduce a 1-form.

X = X^i e_i <- a tangent vector which they call a contravariant vector

X = X_i e^i <- a tangent vector which they call a covariant vector

So the same tangent vector X is referred to as contra- or covariant depending on
which basis it is expressed in, where

e^i = g^{ij} e_j (in the Riemann/Lorentz case)

or

e^i = w^{ij} e_j (in the symplectic case)

is a tangent vector.

This tangent vector is related to the 1-form basis element dx^i by

dx^i(e_j) = g(e^i,e_j) (in the Riemann/Lorentz case)

or

dx^i(e_j) = w(e^i,e_j) (in the symplectic case)

So don't let the superscript on e^i fool you. It is a tangent vector and so g,
as well as w, can handle it just fine.

w(e^i,e_k) = w(w^{ij} e_j,e_k) = w^{ij} w(e_j,e_k) = w^{ij} w_{jk} = delta^i_k

Kind of cool :)

> >w = w_{ij} e^i /\ e^j = w^{ij} e_i /\ e_j
>
> Hmmm... let's try to make sense of this. The left thing is
> a two-form. The middle thing is the expression of the two-form
> in terms of a basis; OK! But the right-hand thing is the expression
> of an antisymmetric (2,0) tensor (sum of bivectors) in terms of
> a basis. So for the equals sign on the right, we get "ERROR
> PARSING COMMAND."

Sorry, my notation is really horrible here. I should probably write

w* = w_{ij} e^i /\ e^j = w^{ij} e_i /\ e_j

so that w* is the bivector corresponding to the 2-form w.

w* is a bivector

w_{ij} e^i /\ e^j is a covariant bivector (in the language of [BT])

w^{ij} e_i /\ e_j is a contravariant bivector (in the language of [BT])

To see that they are all equal:

w_{ij} e^i /\ e^j

= w_{ij} (w^{ik} e_k) /\ (w^{jl} e_l)
= w_{ij} w^{ik} w^{jl} e_k /\ e_l
= delta_i^l w^{ik} e_k /\ e_l
= w^{lk} e_k /\ e_l
= w^{ji} e_i /\ e_j
= -w^{ij} e_i /\ e_j

Whoops! They are not equal! They differ by a sign. So I guess I need to rewrite

w* = 1/2 w_{ij} e^i /\ e^j = - 1/2 w^{ij} e_i /\ e_j

where I finally include the required factor of 1/2 (unless we restrict i < j)

So I am basically trying to write some basics of symplectic (Riemannian/Lorentz
too) geometry by avoiding forms. I understand it is a masochistic thing to do,
but I am arguing (and trying to learn in the process why I'm wrong) that it is
possible to study physics without forms and everything you used to do with forms
can be done with covariant vectors expressed in the reciprocal basis. The only
caveat is that you must have a metric for Riemannian/Lorentz geometry, or a
symplectic structure for symplectic geometry.

> >What if we define
> >
> >e^i = w^{ij} e_j,
> >
> >which is possible because w is nondegenerate.
>
> With your definitions given above, w^{ij} is ready to
> eat forms, not vectors (because that's what the {e_i}
> can do). So you want w_{ij} there instead. (Besides,
> w^{ij} is still undefined.)

I think it is more important to balance indices. e^i and e_j are both vectors
but one is a tangent vector e_j = d/dx^j, the other is a tangent vector e^i =
w^{ij} e_j.

If you wanted to write down a form, you can, by identifying

dx^i(e_j) = w(e^i,e_j)

This serves to relate the 1-form dx^i and tangent vector e^i given w.

> This is, up to a sign convention which isn't set in stone
> anywhere, exactly what goes on in Hamiltonian mechanics. Feed
> a vector into a symplectic form, and out pops a one-form! But
> the one-form is always orthogonal to the original vector, because
> w is antisymmetric.

Fortunately, I spent a couple weeks recently and introduced myself to the
wonderful world of geometric algebras (from the Hestenes school), so I can say
that there is a way to feed a bivector a vector and get out a vector.

w* a bivector
x a vector

w(x,.) = w*.x
= (1/2 w^{ij} e_i /\ e_j) . (x^k e_k)
= 1/2 w^{ij} x^k (e_i /\ e_j).e_k
= 1/2 w^{ij} x^k [e_i w(e_j,e_k) - e_j w(e_i,e_k)]
= 1/2 w^{ij} x^k (w_{jk}e_i - w_{ik}e_j)
= 1/2 (delta^i_k x^k e_i + delta^j_k x^k e_j)
= 1/2 (x^i e_i + x^j e_j)
= x

Whoa! I have no idea what just happened :) I have a feeling I missed a sign and
it should be w*.x = -x or something.

> Soon you'll tire of writing all those indices, though.

No kidding :)

> >w(e^i,e_k) = w^{ij} w(e_j,e_k) = w^{ij} w_{jk} = delta^i_k
> >
> >VOILA!
> >
> >Does that work out? :)
>
> Hmmmm... you're trying to *define* the thing on the far left
> with this equation, I guess. But w^{ij} hasn't really been
> defined yet, so there's a problem with everything but the
> thing on the far right!

w^{ij} is defined given w_{ij} just as g^{ij} is defined given g_{ij}

w^{ij} w_{jk} = delta^i_k <- definition of w^{ij}

g^{ij} g_{jk} = delta^i_k <- definition of g^{ij}

since both g and w are nondegenerate, I think this is a valid definition.

> In short: w is ready to eat a vector and spit out a one-form
> orthogonal to the vector in a 1-1 onto way (in finite dimensions,
> at least). But without a metric, you've no way to make that
> one-form into a vector again, except by undoing the same
> operation.

Orthogonal in what sense? You mean x^* = w(x,.) and x^*(x) = 0?

In that case, maybe I wasn't too far off when I wrote w*.x = x (with the
possibility I made a sign error w*.x = -x) because then

w*.x is a vector and (w*.x).x = 0 because w(x,x) = 0,

which would still be true if w*.x = -x,

(w*.x).x = -x.x = -w*(x,x) = 0

> If you have in addition a "complex structure" on each tangent
> space, you can define something like a metric. A complex
> structure is a (1,1) nondegenerate tensor J that satisfies
>
> w(J a, J b) = w(a,b) for vectors a and b
> (it preserves the symplectic structure)
> and
> J^2 = -1 (the n x n identity matrix)
> (it acts like multiplication by i, but on a real vector space)

Nice! This is what I was grabbing for :) Now I have some homework to see if I
can use this complex structure to do away with forms for good :)

It's not that I don't like forms, but I'm just trying to convince myself that
they CAN be done away with as long as you have some nondegenerate structure
lying around that allows you to define a reciprocal basis.

> Then you can show that w(J a, b) is a symmetric (0,2) tensor.
> If you pick your J right, it'll be nondegenerate, too! Ain't
> that just like a metric? (In fact, if you've already *got*
> a metric, you can define J(a) = g(w(a,.),.) and all will be
> dandy.)
>
> Another trick if you've got a metric already: define the sum
> b = g + i w.
> Since g is symmetric, and w is antisymmetric, what can you
> say about b?

Cool! This is related to a question I asked in a post I just wrote responding to
Toby Bartels comment. I wish I had read this first :)

What can I say about b?

Re[b] = g is symmetric
Im[b] = w is antisymmetric

b is a 2-form

If x and y are parallel (wrt g)

b(x,y) = g(x,y)

If x and y are orthogonal (wrt g)

b(x,y) = i w(x,y)

This is making me think of Clifford algebra, but my brain is really stretched. I
just started teaching myself Clifford algebra 2 weeks ago :)

> Both of these tricks are in Nakahara, I think (since you would
> rather read that than the "phone book"). (Yes, disliking MTW
> is considered blasphemy, except by at least one of our
> moderators.)

Ok. I had only studied up through Riemannian geometry in Nakahara. I still have
a ways to go before I have the guts to delve into complex manifolds.

Thanks a lot for your comments! I'm starting to understand more clearly.

Eric

PS: After reading this again, I think I made some notational goofs. I said w
was a bivector and then I wrote things like w(x,y), which doesn't make sense.
I should have written w(x,y) where w is the 2-form corresponding to the
bivector w*. I tried to go back and replace w with w* where I had intended
it to be a bivector.


George Jones
Oct 9, 2000

In message <39DE4FFE...@uiuc.edu>, Eric Forgy wrote:

>So I am basically trying to write some basics of symplectic (Riemannian/
>Lorentz too) geometry by avoiding forms. I understand it is a masochistic
>thing to do, but I am arguing (and trying to learn in the process why I'm
>wrong) that it is possible to study physics without forms and everything
>you used to do with forms can be done with covariant vectors expressed
>in the reciprocal basis. The only caveat is that you must have a metric
>for Riemannian/Lorentz geometry, or a symplectic structure for symplectic
>geometry.

You must also have a basis! How do you avoid equations that are,
as John Baez likes to say, bristling with indices?

A more serious problem is the following. Suppose h is a map
between manifolds M and N, i.e., h : M -> N. If p is in M,
then it's possible to define a pushforward map from the tangent
space at p into the tangent space at h(p), but this map does
not necessarily give rise to map of vector fields on M into
vector fields on N. However, there is a pullback map from
cotangent spaces of N into cotangent spaces of M that does take
1-forms into 1-forms. How does one do this without the cotangent
spaces? This is how a change of variable is done in integration
theory. How does one do integration theory on manifolds?

> Fortunately, I spent a couple weeks recently and introduced myself to the
> wonderful world of geometric algebras (from the Hestenes school), so I can
> say that there is a way to feed a bivector a vector and get out a vector.

I think this answers the question: Why?
I was assuming 2 possible answers to this question.

1) You want to avoid a proliferation of spaces.

2) You want to avoid the abstraction of maps into the real
numbers.

If the answer had been 2), I was going to ask: What is a
tangent vector? What is the tensor product of 2 tangent
vectors?

The answers to both questions usually involve maps into
the real numbers.

The answer to the second question often (but not always)
involves the dual space so I am curious as to what your
answer is.

If you're interested in Clifford (geometric) algebras, I
encourage you to look at works by other authors, as well as
those in the Hestenes school. You'd probably like seeing
the Maxwell equation written as *one* Clifford algebra
equation.

Regards,
George


Johan Braennlund
Oct 9, 2000

In article <39DD38CD...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:

> John Baez wrote:
>
> > In article <39DBE9AB...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:
> > >I've never heard of a variety of metrics. Is there such thing as
> > > a single manifold with two distinct metrics on it? What would that
> > > mean?

Certainly there is! Often there is no god-given way to measure
distances; to take an utterly trivial example, choose a metric g on your
manifold, multiply it by the number 57 and you have another one. It
sounds like you may put more into the word "manifold" than what's
customary - choosing a manifold means (modulo technicalities) that you
fix the topology of your space.

> >
> > Two different solutions of Einstein's equations of general
> > relativity, for example!
>
> Oh man. I hope I misunderstood you or I really have no conceptual
> understanding for this stuff.
>
>

> Take for example two coalescing black holes. There is just one
> solution/metric that describes this, right? It wouldn't make sense to
> talk about two distinct solutions/metrics for this binary system would
> it? The solution should be unique.

Under certain conditions, yes it's unique, but in that case you assume
much more than just a certain manifold: you assume Einstein's equations
to hold, with certain boundary conditions (asymptotic flatness or
whatever).

It's often the case that one is interested in more than one metric on a
manifold, not necessarily imposing Einstein's equations (this would be
analogous to "off-shell mechanics", if you're familiar with that term).
For instance, you may do perturbation theory from an exact solution and
then you have two metrics, one being the exact solution and the other
being the perturbed one. As another example, you may look at the
conformal geometry of the manifold (i.e. those geometrical properties
that are invariant if the metric is multiplied by a positive function),
and here there's an infinite number of metrics that you study.

Johan




fo...@my-deja.com
Oct 9, 2000

Thank you to the moderator for giving me a second chance to fix my line
widths :)

Toby Bartels wrote:

> When I talked about the metric varying, I didn't mean varying from
> point to point in spacetime; rather, I meant varying in the sense that
> more than a single metric was being considered. If you're studying a
> particular spacetime in GR, then you can do this trick of treating
> 1forms as vector fields whether or not that spacetime is Minkowski
> spacetime.

Ok. I see :)

> However, if you're analysing a broad class of spacetimes, with many
> different metrics on the same manifold, then this will not work out
> so well.

I still don't know exactly why anyone would want to do this, but it's
just my ignorance showing through.

> In the most extreme situation, study Lagrangian GR; here, you have a
> manifold and a Lagrangian density which is a function of the fields
> on the manifold *including the metric*. Now the metric can be *any*
> nondegenerate symmetric bilinear form. Then, to solve the system and
> discover *which* metrics are possible, you apply the principle of
> stationary action which says that the action (the integral of the
> Lagrangian density over the manifold) is stationary to 1st order
> when you vary the fields *including the metric*.

I would be tempted to turn the argument around and say that you are
still simply dealing with one metric, but you are varying that single
metric. I guess it is something like the difference between active and
passive transformations. On one hand, you can think of there being a
space of all metrics, and then the variation may be thought of as passing
from one such metric to another. On the other hand, I'd tend to think of
it like having a single metric that gets pushed and tugged on.

> >I'm on much weaker ground when it comes to symplectic geometry,
> >but doesn't a symplectic manifold require a nondegenerate symplectic
> >structure w. This pretty much takes the place of a metric, doesn't it?
> >Could you define a reciprocal basis on a symplectic manifold?
>
> Yes, I suppose you could get away with that, just as you described it.
> One problem is that there are different conventions for w^{ij},
> differing by a sign (ultimately because of the antisymmetry of w).
> Another problem is what to do when there's both a metric and a
> symplectic form. Which do you use to turn 1forms into vector fields?

I'm grabbing for straws here, but I've tried to follow some threads
where you guys were talking about complex metrics. Something like any two
of the following three objects determine the third:

1.) complex metric
2.) real metric
3.) real symplectic structure

Something like that.

Any tensor T of rank 2 can be decomposed into symmetric and
antisymmetric parts, g and w, respectively. Could you combine w and g
into a single tensor with components

T_{ij} = g_{ij} + w_{ij}

(or perhaps it should be something like

T_{ij} = g_{ij} + Jw_{ij} with J^2 = -1

but I'm guessing wildly)

Then if you happened to have both a metric and a symplectic structure at
the same time, then couldn't you use some combination T of them to
identify your covariant and contravariant vectors? You'd need some
combination of g and w so that the combined tensor is also
nondegenerate, so that T_{ij} and T^{ij} are both defined in such a way
that T_{ij} T^{jk} = delta_i^k.

I guess what I am saying, is that a real metric and a real symplectic
structure determine a third structure, which I think was called a
complex metric. Then what I am suggesting is that you use this third
structure to define covariant and contravariant vectors.
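
For what it's worth, here's a quick numerical sanity check of that idea
(a Python/numpy sketch; the matrices below are made up, not canonical):

    import numpy as np

    # Made-up g (symmetric, nondegenerate) and w (antisymmetric) on R^4.
    g = np.eye(4)
    w = np.array([[ 0., 1., 0., 0.],
                  [-1., 0., 0., 0.],
                  [ 0., 0., 0., 1.],
                  [ 0., 0.,-1., 0.]])

    T = g + w                     # T_{ij} = g_{ij} + w_{ij}
    T_up = np.linalg.inv(T)       # T^{jk}, defined iff T is nondegenerate
    print(np.allclose(T @ T_up, np.eye(4)))   # True: T_{ij} T^{jk} = delta

    # But nondegeneracy of g and w separately is not enough in general:
    g2 = np.diag([1., -1.])
    w2 = np.array([[0., 1.], [-1., 0.]])
    print(np.linalg.det(g2 + w2))   # 0.0: the combination is degenerate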

> In fact, since every (paracompact) manifold has some metric on it,
> you could just pick one randomly and use this trick in any situation!

> I hope you can see the danger here: You'd be tempted to write things
> like A^i B_i without knowing whether A and B are really 1forms or
> vector fields.

I'd tend to stick to the idea that a metric defines lengths and angles
so that given any paracompact manifold, you can choose a metric. But
once you have chosen a metric you have specified the shape and rigid
structure that you are assigning to that manifold. Kind of what I am
arguing is that you would never confuse A_i with the component of a
1-form because there are no 1-forms! :) A_i would always be the
component of a tangent vector expressed in the reciprocal basis.

> If they're both the same (say, both vector fields), then A^i B_i
> really means g_ij A^i B^j, where g is this random metric that we
> pulled out of thin air. Since g has no actual physical meaning, this
> quantity A^i B_i has no actual physical meaning, even though you
> can't tell that from the notation.

Well, like I said, I would always give a metric a physical meaning as
defining lengths and angles. In this case A^i B_i = g_{ij} A^i B^j does
have physical meaning. If a metric didn't have physical meaning as
defining lengths and angles, I wouldn't call it a metric. I'd call it a
twice covariant, symmetric, nondegenerate tensor :) This is just a
definition thing though. I understand that the definition of a metric
doesn't necessarily require it to specify lengths and angles, which I
assume is what you mean by "physical metric."

> Well, the same problem can happen when there's a physical metric
> around. It's often useful to know if a certain quantity is metric
> independent. If you can express it as A^i B_i *without* using the
> metric to raise and lower indices, then you know it's metric
> independent. OTOH, if A^i B_i really means g_ij A^i B^j, then it's a
> fine physical quantity (since g is meaningful now), but it's still a
> metric dependent physical quantity, which might be something that's
> useful to know.

Ok, but if A^i B_i is not metric dependent, then it must be basis
dependent. Why would we be interested in something that is basis
dependent? Or am I missing something again?

This is really great to be able to discuss this stuff. Thanks a lot. I'm
in the electrical engineering department here and I'm kind of the lone
diff geo person in my group. Everyone else thinks I am a freak :)

Eric

George Jones

Oct 9, 2000
In article <39E1DD6C...@uvi.edu>, I wrote:

> However, there is pullback a map from
> cotangent of N into cotangent spaces of P, that does take
> 1-forms into 1-forms.

I was under time pressure when I typed this in, and thus was even
more incoherent than usual. I meant to write:

However, there are pullback maps from the cotangent spaces of N
into the cotangent spaces of M, and these define a pullback that
takes 1-forms on N to 1-forms on M.

> If your interested ...

If you're interested ...

Regards,
George

fo...@my-deja.com

Oct 9, 2000
In article <39E1DD6C...@uvi.edu>,
George Jones <gjo...@uvi.edu> wrote:

> A more serious problem is the following. Suppose h is a map
> between manifolds M and N, i.e., h : M -> N. If p is in M,
> then it's possible to define a pushforward map from the tangent
> space at p into the tangent space at h(p), but this map does
> not necessarily give rise to map of vector fields on M into
> vector fields on N. However, there is pullback a map from
> cotangent of N into cotangent spaces of P, that does take
> 1-forms into 1-forms. How does one do this without the cotangent
> spaces? This is how a change of variable is done in integration
> theory. How does one do integration theory on manifolds?

Let me take a stab at this.

Let h: M -> N be a map from a manifold M to a manifold N. Then tangent
vectors X,Y on M get pushed forward to h_*X, h_*Y on N. If N is equipped
with a metric g_N, then we can use h to pull back g_N to a metric g_M on
M (can we? I think so.)

g_M = h^* g_N

and

g_M(X,Y) = g_N(h_*X,h_*Y).
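
Here's a minimal numerical sketch of that pullback formula, assuming
the standard toy example of the unit sphere sitting in Euclidean R^3
(the map h and the sample point are made up):

    import numpy as np

    def h(theta, phi):
        # The map h: M -> N, here the unit sphere embedded in N = R^3.
        return np.array([np.sin(theta)*np.cos(phi),
                         np.sin(theta)*np.sin(phi),
                         np.cos(theta)])

    def jacobian(theta, phi, eps=1e-6):
        # Columns are the pushed-forward coordinate basis vectors h_* e_i.
        dth = (h(theta + eps, phi) - h(theta - eps, phi)) / (2*eps)
        dph = (h(theta, phi + eps) - h(theta, phi - eps)) / (2*eps)
        return np.column_stack([dth, dph])

    g_N = np.eye(3)                  # Euclidean metric on N
    theta, phi = 0.7, 1.3            # a made-up point of M
    J = jacobian(theta, phi)

    # g_M(X,Y) = g_N(h_* X, h_* Y)  =>  g_M = J^T g_N J in components:
    g_M = J.T @ g_N @ J
    print(np.round(g_M, 6))   # ~ diag(1, sin(theta)^2): the round metric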

Then in my naive world where there are no 1-forms, all I have is
covariant and contravariant tangent vectors and a metric. It is the
contravariant tangent vectors that get pushed forward, but the covariant
tangent vectors must be pulled back.

X a contravariant tangent vector on M
A a covariant tangent vector on N

h_*X a contravariant tangent vector on N
h^*A a covariant tangent vector on M

with

g_M(h^*A,X) = g_N(A,h_*X)

I guess this means that A = h_*h^*A :)

> I think this answers the question: Why?
> I was assuming 2 possible answers to this question.
>
> 1) You want to avoid a proliferation of spaces.
>
> 2) You want to avoid the abstraction of maps into the real
> nubers.

If you are asking for my motivation, then the answer is really just like
answering, "Why do people climb Mount Everist?" Because it's there :) I
just "think" that it can be done (doing away with forms) and since I'm
just learning this stuff, I'm trying to see whether my instincts are
working or not. I guess another honest motivation is that I can "see"
what a tangent vector is, but I cannot see what a form is and the
typical pictorial attempt given in e.g. MTW leaves something to be
desired in my opinion.

The contravariant basis e_i = d/dx^i are gotten from curves and can
easily be drawn as arrows. When you have a metric, the covariant basis
e^i = g^{ij} e_j are also tangent vectors and may also be represented by
arrows, and their meaning is pretty clear: e^i is an arrow
"perpendicular" to all the other e_j for j != i. I think this is very
nice and visualizable. I do not like the idea of picturing forms as
extended surfaces where the value of the form on the vector is the
number of surfaces pierced because this does injustice to the necessary
locality that the whole idea of a tangent space should be trying to
instill. A tangent vector is nothing but an element of some vector
space, that might be easily visualized or not, that is associated with a
"point" on a manifold, and I think the idea of piercing surfaces is
really nonlocal and takes away from the nice tangent space idea.

About integration on manifolds...

I don't see any problem with integrating p-covariant tangent vectors.
All you need is an exterior algebra on your tangent space, then a
covariant bivector may be written as

B = 1/2 B_{ij} e^i /\ e^j

Integrating a 2-form

B^* = 1/2 B_{ij} dx^i /\ dx^j

corresponding to the covariant bivector B is usually done by defining

int_S B^*
= int_S 1/2 B_{ij} dx^i /\ dx^j
:= int_S 1/2 B_{ij} dx^i dx^j

where the last equality is just regular integration. This identification
is somewhat arbitrary and you could just as well define integration of a
covariant bivector B as

int_S B
= int_S 1/2 B_{ij} e^i /\ e^j
:= int_S 1/2 B_{ij} dx^i dx^j

where the last equality is regular integration and the dx^i,dx^j are NOT
1-forms.
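
To make the "regular integration" step concrete, here's a small
Python/numpy sketch with a made-up component B_{12} = x^2, integrated
over the unit disk in polar parameters:

    import numpy as np

    # Integrate B = 1/2 B_{ij} dx^i dx^j over the unit disk, taking the
    # made-up B_{12} = -B_{21} = x^2 (i.e. B "=" x^2 dx^1 /\ dx^2).
    n = 500
    r = (np.arange(n) + 0.5) / n               # midpoints in (0, 1)
    t = (np.arange(n) + 0.5) * 2*np.pi / n     # midpoints in (0, 2*pi)
    R, T = np.meshgrid(r, t, indexing="ij")
    X = R * np.cos(T)

    # dx^1 dx^2 pulls back to (Jacobian determinant) dr dt, and that
    # determinant for polar coordinates is r:
    integrand = X**2 * R
    total = integrand.sum() * (1.0/n) * (2*np.pi/n)

    print(total, np.pi/4)   # integral of x^2 over the unit disk is pi/4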

Thank you for your questions. I hope that my answers are coherent
enough. I really like thinking about this stuff and the more I do, the
more I am becoming convinced that forms are really not necessary as long
as you have a metric (preferably a physical one defining lengths and
angles) lying around. If you have any further reasons why forms are
REALLY necessary, I'd love to hear about it. Here I think I showed a way
in which we can deal with:

1.) push forward of contravariant tangent vectors
2.) pull backs of covariant tangent vectors
3.) integration of covariant p-vectors on a manifold
4.) change of coordinates, I hope is obvious

All this without using a differential form once. Is there anything else
that would really make forms necessary?

> If your interested in Clifford (geometric) algebras, I
> encourage you to look at works by other authors, as well as
> those in the Hestenes school. You probably like seeing
> the Maxwell equation written as *one* Clifford algebra
> equation.

Yes, but what I am interested in is waves and fields in inhomogeneous
media. Some people may scoff, but I think it is possible to study the
differential geometry of inhomogeneous (even anisotropic) media. In
fact, this is the only reason I am studying it at all. I think there are
a lot of subtle issues when you have inhomogeneous, possibly even
dispersive, media, that can be studied using the tools of differential
geometry.

The unification to *one* equation is only possible in free space, if I
remember correctly.

Thanks again,
Eric


George Jones

Oct 11, 2000

> Let h: M -> N be a map from a manifold M to a manifold N. Then tangent
> vectors X,Y on M get pushed forward to h_*X, h_*Y on N. If N is equipped
> with a metric g_N, then we can use h to pull back g_N to a metric g_M on
> M (can we? I think so.)

g_N can be pulled back to M because at each element of N, g_N is
an element of (cotangent space) (x) (cotangent space).

> g_M = h^* g_N
>
> and
>
> g_M(X,Y) = g_N(h_*X,h_*Y).

What if M already has a metric?

> Then in my naive world where there are no 1-forms, all I have is
> covariant and contravariant tangent vectors and a metric. It is the
> contravariant tangent vectors that get pushed forward, but the covariant
> tangent vectors must be pulled back.

Vector fields on M can't necessarily be pushed forward to vector
fields on N. 1-forms on N can always be pulled back to 1-forms on
M.

I guess you define the distinction between contravariant and
covariant by transformation properties. So, in your world, two
vectors in the same space can be considered equal mathematically,
but not equal because they transform differently. All the indices
and associations make things damn ugly. In a basis-free approach,
transformation properties are derived!

> The contravariant basis e_i = d/dx^i are gotten from curves and can
> easily be draw as arrows. When you have a metric, the covariant basis
> e^i = g^{ij} e_j are also tangent vectors and may also be represented by
> arrows, and their meaning is pretty clear: e^i is an arrow
> "perpendicular" to all the other e_j for j != i. I think this is very
> nice and visualizable. I do not like the idea of picturing forms as
> extended surfaces where the value of the form on the vector is the
> number of surfaces pierced because this does injustice to the necessary
> locality that the whole idea of a tangent space should be trying to
> instill.

Tangent vectors should be visualized as arrows, but this won't
do for a mathematical definition because a manifold M may not
be embedded in some higher dimensional space, and thus there is
nowhere for the arrows to stick out into. A definition of tangent
vector that is intrinsic to the manifold M is needed, and this
usually involves mappings of function spaces to R. A definition
involving equivalence classes of curves may also be used.

The tensor product of finite-dimensional vector spaces V and W
is often defined as the space of real-valued bilinear maps on
V*xW*. Those darn dual spaces again! They can be avoided by
defining a basis for V (x) W to be B_V x B_W where B_V and B_W
are bases for V and W. Essentially the tensor product space is
the free vector space on B_V x B_W. There is also a more
technical construction that: 1) avoids dual spaces; 2) works
for infinite-dimensional spaces as well.
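
A toy version of that free-vector-space construction, as a Python
sketch (finite bases labelled by strings; all names here are made up):

    from collections import defaultdict

    # Toy tensor product as the free vector space on B_V x B_W: a tensor
    # is a dict from pairs of basis labels to coefficients.  No dual
    # spaces anywhere.
    def tensor(v, w):
        out = defaultdict(float)
        for a, x in v.items():
            for b, y in w.items():
                out[(a, b)] += x * y
        return dict(out)

    v = {"e1": 2.0, "e2": -1.0}      # 2 e1 - e2 in V
    w = {"f1": 3.0}                  # 3 f1 in W
    print(tensor(v, w))              # {('e1','f1'): 6.0, ('e2','f1'): -3.0}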

I was curious about how you would define tensor products
without dual spaces, but, unless I missed something (quite
possible), you didn't respond to this question.

I have never liked the surface-piercing pictorial motivation
for forms either, but I found that the algebraic approach
gradually grew on me.

> If you have any further reasons why forms are
> REALLY necessary, I'd love to hear about it.

I never said that your approach is impossible, I was merely
trying to point out some of the myriad of things that need to be
dealt with. I've known about using reciprocal vectors in
Clifford algebra stuff for years.

> Yes, but what I am interested in is waves and fields in inhomogeneous
> media. Some people may scoff, but I think it is possible to study the
> differential geometry of inhomogeneous (even anisotropic) media. In
> fact, this is the only reason I am studying it at all. I think there are
> a lot of subtle issues when you have inhomogeneous, possibly even
> dispersive, media, that can be studied using the tools of differential
> geometry.
>
> The unification to *one* equation is only possible in free space, if I
> remember correctly.

In flat spacetime, use an inertial coordinate system to
define the operator D = e^i d/dx^i. This definition is independent
of which inertial system was used. Then

D F = J

is *the* Maxwell equation (including sources) in Cl(1,3), the
Clifford algebra for spacetime. Something similar can be done
in the smaller algebra Cl(3), which is isomorphic to the complex
quaternions.

Reciprocal vectors are used also when using Clifford algebras
for calculations in general relativity. For example, define
the Ricci "4-vectors" by R_i = e^j R_ij. Then
Einstein's equations are

R_i - (1/2) R e_i = 8 pi T_i ,

where the "4-vectors" T_i are defined similarly. Curvature
bivectors, and connections can also be defined. Again, all
of this can be done in Cl(1,3) or the smaller algebra Cl(3).

Clifford algebra people can be quite dogmatic in their views,
and I wrongly assumed these considerations were motivating you.

For a book rich in physical applications (but somewhat low in
mathematical rigor), try "Clifford Algebra: A Computational Tool
for Physicists" by John Snygg. He really does a lot. It's good
to look also at some of the real math books on Clifford algebras,
especially for spinors.

And if you're really interested in differential geometry, I
encourage you to learn the standard way as well. Who knows,
someday you may want to communicate with somebody who only knows
it the standard way!

I don't know what you're getting at when talking about using
differential geometry for inhomogeneous media, but it sounds
interesting.

Regards,
George

Toby Bartels

Oct 12, 2000
Eric wrote:

>Toby Bartels wrote:

>>In the most extreme situation, study Lagrangian GR; here, you have a
>>manifold and a Lagrangian density which is a function of the fields
>>on the manifold *including the metric*. Now the metric can be *any*
>>nondegenerate symmetric bilinear form. Then, to solve the system and
>>discover *which* metrics are possible, you apply the principle of
>>stationary action which says that the action (the integral of the
>>Lagrangian density over the manifold) is stationary to 1st order
>>when you vary the fields *including the metric*.

>I would be tempted to turn the argument around and say that you are
>still simply dealing with one metric, but you are varying that single
>metric.

Call it what you like.
The fact remains that if you don't write the metric down,
as a 2nd rank covariant tensor,
then you won't be able to do the Lagrangian analysis.

>I'm grabbing for straws here, but I've tried to follow some threads
>where you guys were talking and complex metrics. Something like any two
>of the following three objects determine the third:

>1.) complex metric
>2.) real metric
>3.) real symplectic structure

It's complex structure, not complex metric.
Anyway, if you have any 2 of these, and these 2 are compatible,
then you automatically have the other one.
A complex metric, then, is a combination of all 3.

>Any tensor T of rank 2 can be decomposed into symmetric and
>antisymmetric parts, g and w, respectively.

In general, these g and w won't be compatible.
(To see if they're compatible,
use them alternately to turn a vector into a 1form and back again, twice.
Depending on your convention for w, you should get + or - the original vector.)
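
Toby's compatibility test is easy to run numerically. A Python/numpy
sketch, using the standard metric and the canonical symplectic form on
R^2 (plus a made-up incompatible metric for contrast):

    import numpy as np

    g = np.eye(2)                            # standard metric on R^2
    w = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])              # canonical symplectic form

    # Lower an index with w, raise it with g: J = g^{-1} w turns a
    # vector into a 1-form and back into a vector.  Doing it twice:
    J = np.linalg.inv(g) @ w
    print(np.allclose(J @ J, -np.eye(2)))    # True: compatible, J^2 = -1

    g_bad = np.diag([1.0, 4.0])              # made-up incompatible metric
    J_bad = np.linalg.inv(g_bad) @ w
    print(np.allclose(J_bad @ J_bad, -np.eye(2)))   # False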

>I guess what I am saying, is that a real metric and a real symplectic
>structure determine a third structure, which I think was called a
>complex metric. Then what I am suggesting is that you use this third
>structure to define covariant and contravariant vectors.

You could use the complex metric to do this --
but only if g and w are compatible.

>>In fact, since every (paracompact) manifold has some metric on it,
>>you could just pick one randomly and use this trick in any situation!

>>If they're both the same (say, both vector fields), then A^i B_i
>>really means g_ij A^i B^j, where g is this random metric that we
>>pulled out of thin air. Since g has no actual physical meaning, this
>>quantity A^i B_i has no actual physical meaning, even though you
>>can't tell that from the notation.

>Well, like I said, I would always give a metric a physical meaning as
>defining lengths and angles. In this case A^i B_i = g_{ij} A^i B^j does
>have physical meaning. If a metric didn't have physical meaning as
>defining lengths and angles, I wouldn't call it a metric. I'd call it a
>twice covariant, symmetric, nondegenerate tensor :)

Fine, but in that case, not every paracompact manifold has a metric.
All you're guaranteed is a twice covariant, symmetric, nondegenerate tensor.

>Ok, but if A^i B_i is not metric dependent, then it must be basis
>dependent. Why would we be interested in something that is basis
>dependent? Or am I missing something again?

If A is a vector field and B is a 1 form,
then A^i B_i is neither metric dependent nor basis dependent.
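
A quick numerical check of this (a numpy sketch with made-up
components): the pairing of a 1-form with a vector survives an
arbitrary change of basis, with no metric anywhere in sight:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal(3)          # components A^i of a vector
    B = rng.standard_normal(3)          # components B_i of a 1-form

    # New basis vectors = columns of P (invertible with probability 1):
    P = rng.standard_normal((3, 3))
    A_new = np.linalg.inv(P) @ A        # vector components: contravariant
    B_new = P.T @ B                     # 1-form components: covariant

    print(A @ B)                        # A^i B_i in the old basis
    print(A_new @ B_new)                # the same number, no metric used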


-- Toby
to...@math.ucr.edu


Paul Arendt

Oct 13, 2000
In article <39DE4FFE...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:
>Paul Arendt wrote:

>>Eric Forgy wrote:
>> >w(e^i,e_j) = delta^i_j
>>
>> ** TILT! ** Poor little w doesn't know how to do this yet. It
>> can only eat things with lower indices, until you teach it to do
>> otherwise.
>
>Maybe I didn't make it clear enough but e^i IS a tangent vector. I know the
>superscripts makes it look a lot like a 1-form, but that is the whole point of
>me asking my questions in the first place. This book I have "Vector and Tensor
>Analysis with Applications", Borisenko and Tarapov [BT] spend an entire book
>talking about covariant, contravariant, and mixed tensors but they do not once
>introduce a 1-form.

Well, I'm not sure what you mean by this! If you've got a (0,1) type
tensor, you've got a one-form. It's like saying that there are no
elephants at the zoo; only animals.

>X = X^i e_i <- a tangent vector which they call a contravariant vector
>
>X = X_i e^i <- a tangent vector which they call a covariant vector

Calling both of these a "tangent" vector really doesn't add much beyond
what calling them just "vectors" would. It's easy to *write* these
things with either up or down indices, but

***** WITHOUT A METRIC, YOU'VE NO WAY OF LEGITIMATELY *****
***** GIVING BOTH OF THESE OBJECTS THE COMMON NAME, X. *****

Okay... before I dive into the sea of indices you've provided
below, let me make a few remarks. A symplectic structure *does*
let you define a metric, but it lets you define lots of metrics!
And the method you follow will give a *different* result if a
different coordinate system is used, which is a bad thing.
It's like saying an otherwise featureless manifold lets you define
a metric -- sure, but there's still a lot of choice involved.

In particular, the method I think you're trying to use below
(to feed w two basis vectors, e_i and e_j, to somehow get
delta^i_j out), will depend on the coordinate system chosen.
It's true that there are some preferred coordinates for
symplectic structures. You can say, "ooooh -- w looks pretty
in these coordinates!" (This is called Darboux' Theorem, if
you want the jargon.) "Let's define a complex structure J
using these coordinates in the canonical way, and use it to
define a metric, with the recipe g(a,b) = w(Ja,b)."

For example, suppose your symplectic manifold is the plane, with
coordinates (q,p), and w in the canonical form dp ^ dq.
Then you can redefine coordinates Q = 2q, P = p/2, and w will still
be in the canonical form dP ^ dQ. (The set of all such
transformations are just the "canonical transformations" of
classical mechanics, or "symplectomorphisms" of
differential geometry, for your jargon amusement.)

However, the metric you're trying to define in the coordinates
(p,q) will be DIFFERENT than the metric you define in the
coordinates (P,Q). "Horizontal" and "vertical" distances will
change by a factor of 1/2 and 2, respectively. In more than
2 dimensions, many other choices can be made. (The
technobabble reason for this is that O(2n), the group of linear
transformations preserving a given metric, is a *proper*
subgroup of Sp(2n), the group of linear transformations
preserving a given symplectic form. There are symplectic-
preserving transformations, like the "shear" one given above,
that do not preserve the metric.)
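
Paul's "shear" example is easy to check numerically. A numpy sketch in
the (q,p) components, using the fact that a bilinear form with matrix M
pulls back to S^T M S under the linear map S:

    import numpy as np

    w = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])          # dp ^ dq in (q,p) components
    g = np.eye(2)                        # a metric on the same plane

    S = np.diag([2.0, 0.5])              # Q = 2q, P = p/2

    print(np.allclose(S.T @ w @ S, w))   # True:  w is preserved
    print(np.allclose(S.T @ g @ S, g))   # False: g is not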

Several things will let you define a metric in an unambiguous way,
however. One is a metric on just a subset consisting of one of
each pair of canonical coordinates (called a "Lagrangian
Subspace" at a point, or a "Lagrangian Submanifold" globally).
That, plus w, is enough to get a metric on the entire space.
(Choosing a unit of length for q, for example, determines a
unit of length on p, and invalidates any shear transformations.)
Another is a complex structure J, defined in a previous post.
Do you see how a complex structure keeps you from performing
any shear transformations in the 2-D case?

[Snip some index-laden formulae which all seem to assume that
there is a natural way of raising indices in such a way as
to satisfy e^i e_j = delta^i_j...]

>...I am arguing (and trying to learn in the process why I'm wrong) that
>it is possible to study physics without forms and everything you used
>to do with forms can be done with covariant vectors expressed in the
>reciprocal basis.

Sure, that's true. I'm sure plenty of folks here have anecdotes
about studying physics without forms "way back in the day..." :-)
It's also possible to get from Fairbanks to Cape Horn by walking.

>The only
>caveat is that you must have a metric for Riemannian/Lorentz geometry, or a
>symplectic structure for symplectic geometry.

Well, you need whatever structure is deemed necessary by the physics
you're trying to do. If you only want to do calculus, you just need
a coordinate system.

>> >What if we define
>> >
>> >e^i = w^{ij} e_j,
>>
>> With your definitions given above, w^{ij} is ready to
>> eat forms, not vectors (because that's what the {e_i}
>> can do).
>
>I think it is more important to balance indices.

There's some notational confusion going on here. I'm sure
that it's due to calling the e_j basis vectors (like x-hat)
in some places, and calling them components of vectors
themselves in other places (which have been expanded in
some other basis that's not labeled by {e_i}'s). Your
indices will balance if you're consistent about it, however.

>Fortunately, I spent a couple weeks recently and introduced myself to the
>wonderful world of geometric algebras (from the Hestenes school), so I can
>say that there is a way to feed a bivector a vector and get out a vector.
>
>w* a bivector
>x a vector
>
>w(x,.)

is a two-form eating a vector, giving a one-form.

> [...] = w*.x
>= (1/2 w^{ij} e_i /\ e_j) . (x^k e_k)
>= 1/2 w^{ij} x^k (e_i /\ e_j).e_k
>= 1/2 w^{ij} x^k [e_i w(e_j,e_k) - e_j w(e_i,e_k)]
>= 1/2 w^{ij} x^k (w_{jk}e_i - w_{ik}e_j)
>= 1/2 (delta^i_k x^k e_i + delta^j_k x^k e_j)
>= 1/2 (x^i e_i + x^j e_j)
>= x
>
>Whoa! I have no idea what just happened :)

I think the magic was in the step assuming that
w^{ij} w_{jk} = delta^i_k .
You've assumed that there's some way of raising and
lowering indices on w. The method you proposed for doing
so above depended upon a choice of coordinate system, though.

>w^{ij} is defined given w_{ij} just as g^{ij} is defined given g_{ij}
>
>w^{ij} w_{jk} = delta^i_k <- definition of w^{ij}

... and I think this must depend on the coordinates, too. But I
haven't thought about it too hard.

>> In short: w is ready to eat a vector and spit out a one-form
>> orthogonal to the vector in a 1-1 onto way (in finite dimensions,
>> at least).

>Orthogonal in what sense? You mean x^* = w(x,.) and x^*(x) = 0?

Yes, precisely because w(x,x) = 0!

>> Another trick if you've got a metric already: define the sum
>> b = g + i w.
>> Since g is symmetric, and w is antisymmetric, what can you
>> say about b?
>

>Re[b] = g is symmetric
>Im[b] = w is antisymmetric
>
>b is a 2-form
>
>If x and y are parallel (wrt g)
>
>b(x,y) = g(x,y)
>
>If x and y are orthogonal (wrt g)
>
>b(x,y) = i w(x,y)

Yes, but think about b as a matrix. What type of matrix has symmetric
real part, and antisymmetric imaginary part? (You actually already
gleaned what b can do in your reply to Toby's post, I think.)


squ...@my-deja.com

Oct 14, 2000
I tried to follow your discussion, and you seem to be arguing over
whether it's possible to define a correspondence between vectors and
covectors on a symplectic manifold - i.e. using a non-degenerate anti-
symmetric form rather than a metric. I must note it IS possible, and
may be done in the following way:

Given a vector v, the covector v' is defined by v'(u) = w(v, u). Now,
it may be verified this function is one-to-one as w is non-degenerate,
so this provides an isomorphism between the tangent and cotangent
spaces. There is only one exotic phenomenon here - when you construct w
on the cotangent space using this isomorphism, and try to construct a
new isomorphism, based on this new w, you get not the inverse of the
former function, but the minus of this inverse (!). We discussed this
with John Baez recently. There's no need to actually introduce a metric.
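
A small matrix check of that minus sign (a numpy sketch; sign
conventions for the induced form vary, but the "minus of the inverse"
conclusion comes out the same either way):

    import numpy as np

    W = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])        # components w_{ij}, nondegenerate

    # Carry w over to the cotangent space via v -> w(v,.).  In
    # components the induced form is W* = W^{-T} W W^{-1}, and the
    # antisymmetry W^T = -W collapses this to minus the inverse:
    W_star = np.linalg.inv(W).T @ W @ np.linalg.inv(W)
    print(np.allclose(W_star, -np.linalg.inv(W)))    # True

    # So the second isomorphism undoes the first only up to a sign:
    print(np.allclose(W_star @ W, -np.eye(2)))       # True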

Best regards, squark.


fo...@my-deja.com

Oct 14, 2000
In article <8s5d8h$2g20$1...@newshost.nmt.edu>,
par...@nmt.edu (Paul Arendt) wrote:
> In article <39DE4FFE...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:

> >X = X^i e_i <- a tangent vector which they call a contravariant
> > vector
> >
> >X = X_i e^i <- a tangent vector which they call a covariant vector
>
> Calling both of these a "tangent" vector really doesn't add much
> beyond what calling them just "vectors" would. It's easy to *write*
> these things with either up or down indices, but
>
> ***** WITHOUT A METRIC, YOU'VE NO WAY OF LEGITIMATELY *****
> ***** GIVING BOTH OF THESE OBJECTS THE COMMON NAME, X. *****

Exactly! They ARE both tangent vectors because in Borisenko and Tarapov
they HAVE a metric and e_i IS a tangent vector because it is defined as

e_i = d/dx^i

as usual, and e^i is ALSO a tangent vector because it is defined by

e^i = g^{ij} e_j

i.e. a linear combination of tangent vectors. This definition is only
possible if you have a metric (obviously because it is defined in terms
of g^{ij}). This is fine because everything I said and everything I
claimed was assuming we DID have a metric.

Given a tangent vector X = X^i e_i and a metric g, you can define the
1-form

X^* = g(X,.) = X_i dx^i,

where

X_i = g_{ij} X^j. So by looking at this expression alone you can not
really conclude whether or not X_i is a component of a tangent vector
written using the reciprocal basis or if it is a component of a 1-form
because

X
= X^i e_i
= X^i (g_{ij} e^j)
= X_j e^j,

where

X_j = g_{ij} X^i. The SAME component as the 1-form. All this is really
saying is that

dx^i(e_j) = g(e^i,e_j) = delta^i_j

Then this really boils down to

X^*(Y) = g(X,Y),

where X^* = g(X,.) is a 1-form. After a couple posts I realized that
this is nothing but an expression of Riesz representation theorem!
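
Here's a quick numerical illustration of those relations (a numpy
sketch with a made-up non-orthogonal basis of Euclidean R^3):

    import numpy as np

    # Columns are a made-up, non-orthogonal basis e_1, e_2, e_3 of R^3:
    E = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [0.0, 0.0, 1.0]])

    g = E.T @ E                      # g_{ij} = e_i . e_j
    E_recip = E @ np.linalg.inv(g)   # columns e^i = g^{ij} e_j: these are
                                     # tangent vectors, not covectors

    # The defining property e^i . e_j = delta^i_j:
    print(np.allclose(E_recip.T @ E, np.eye(3)))   # True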

> In particular, the method I think you're trying to use below
> (to feed w two basis vectors, e_i and e_j, to somehow get
> delta^i_j out), will depend on the coordinate system chosen.

Well, it would depend on the coordinate system chosen if I was feeding
two basis vectors e_i and e_j, but I'm not. I'm feeding one basis vector
e^i and another e_j, where e^i is defined in terms of w, so the end
result does not depend on which coordinates you choose. Just like
defining g(e^i,e_j) = delta^i_j doesn't depend on the coordinates you
choose because

g(e^i,e_k)
= g(g^{ij} e_j,e_k)
= g^{ij} g(e_j,e_k)
= g^{ij} g_{jk}
= delta^i_k

and the last step

g^{ij} g_{jk} = delta^i_k

doesn't depend on coordinates.

> [Snip some index-laden formulae which all seem to assume that
> there is a natural way of raising indices in such a way as
> to satisfy e^i e_j = delta^i_j...]

Of course all of these assumed I could raise and lower indices because
e^i is DEFINED as e^i = g^{ij} e_j or (e^i = w^{ij} e_j) and everything
I said assumed we had a nondegenerate g or w :)

> >w* a bivector
> >x a vector
> >
> >w(x,.)
> is a two-form eating a vector, giving a one-form.
>
> > [...] = w*.x
> >= (1/2 w^{ij} e_i /\ e_j) . (x^k e_k)
> >= 1/2 w^{ij} x^k (e_i /\ e_j).e_k
> >= 1/2 w^{ij} x^k [e_i w(e_j,e_k) - e_j w(e_i,e_k)]
> >= 1/2 w^{ij} x^k (w_{jk}e_i - w_{ik}e_j)
> >= 1/2 (delta^i_k x^k e_i + delta^j_k x^k e_j)
> >= 1/2 (x^i e_i + x^j e_j)
> >= x
> >
> >Whoa! I have no idea what just happened :)
>
> I think the magic was in the step assuming that
> w^{ij} w_{jk} = delta^i_k .
> You've assumed that there's some way of raising and
> lowering indices on w. The method you proposed for doing
> so above depended upon a choice of coordinate system, though.

Argh. I really do not see how this depended on a coordinate system.

e^i is a reciprocal basis defined by a given basis of tangent vectors
e_i and a "nondegenerate" 2-form w. The fact that it is a 2-form means
that it is independent of coordinates although the expression of w in
components will of course depend on coordinates. Still w does not. The
fact that it is nondegenerate means that the inverse of the matrix
formed by the components w_{ij}, which has elements w^{ij}, is defined.

So because w^{ij} are defined and because e^i = w^{ij} e_j, then
everything I said should be independent of coordinates.

>
> >w^{ij} is defined given w_{ij} just as g^{ij} is defined given g_{ij}
> >
> >w^{ij} w_{jk} = delta^i_k <- definition of w^{ij}
>
> ... and I think this must depend on the coordinates, too. But I
> haven't thought about it too hard.

If this depends on coordinates then I really need to go back and start
all over :)

Under a change of coordinates:

(letting d = \partial)

w^{ij} = dx^i/dx^i' dx^j/dx^j' w^{i'j'}

w_{jk} = dx^j"/dx^j dx^k'/dx^k w_{j"k'}

Then

w^{ij} w_{jk}
= dx^i/dx^i' dx^j/dx^j' dx^j"/dx^j dx^k'/dx^k w^{i'j'} w_{j"k'}
= dx^i/dx^i' dx^k'/dx^k w^{i'j'} w_{j'k'},

where I used

dx^j/dx^j' dx^j"/dx^j = delta_j'^j".

Also

delta^i_k = dx^i/dx^i' dx^k'/dx^k delta^i'_k'

So

w^{ij} w_{jk} = delta^i_k

and

w^{i'j'} w_{j'k'} = delta^i'_k'.

I think this is a standard proof, unless I mixed something up.
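
The same proof can be checked numerically: transform w_{ij} with an
arbitrary (made-up) Jacobian, invert directly in the new coordinates,
and compare with the contravariantly transformed old inverse. A numpy
sketch:

    import numpy as np

    rng = np.random.default_rng(1)

    M = rng.standard_normal((4, 4))
    w_lo = M - M.T                     # w_{ij}: antisymmetric, and
                                       # generically nondegenerate in
                                       # even dimension

    Jac = rng.standard_normal((4, 4))  # dx/dx' at a point (made up)

    w_lo_new = Jac.T @ w_lo @ Jac      # covariant transformation of w_{ij}

    w_hi = np.linalg.inv(w_lo)         # w^{ij} in the old coordinates
    w_hi_new = np.linalg.inv(w_lo_new) # inverted in the new coordinates

    # Contravariant transformation of the old inverse gives the same:
    print(np.allclose(w_hi_new,
                      np.linalg.inv(Jac) @ w_hi @ np.linalg.inv(Jac).T))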

> >> Another trick if you've got a metric already: define the sum
> >> b = g + i w.
> >> Since g is symmetric, and w is antisymmetric, what can you
> >> say about b?
> >
> >Re[b] = g is symmetric
> >Im[b] = w is antisymmetric
> >
> >b is a 2-form
> >
> >If x and y are parallel (wrt g)
> >
> >b(x,y) = g(x,y)
> >
> >If x and y are orthogonal (wrt g)
> >
> >b(x,y) = i w(x,y)
>
> Yes, but think about b as a matrix. What type of matrix has symmetric
> real part, and antisymmetric imaginary part? (You actually already
> gleaned what b can do in your reply to Toby's post, I think.)

You mean Hermitian? I tried to read a bit of Nakahara (not much) and it
sounds like you might be getting at a Kahler manifold or something with
a complex metric, but that is over my head. At least for the moment :)

Thanks again for making me think :)

I am fairly convinced now that if you have a (preferably physical)
metric g or symplectic structure w (or both) that you do not need to use
forms at all. Not only do you not need them, the computations are just as
simple and as elegant as the expressions using forms. I was very happy
with a derivation I presented in a previous post about electromagnetism,
but I can show it again.

Given a "particular" physical metric g on a manifold, which in my
opinion is just like fixing a particular structure for my manifold, then
we can define tangent vectors in the usual way. In a coordinate patch,
we can define a basis

e_i = d/dx^i.

At each point in the coordinate patch, we can also define

e^i = g^{ij} e_j.

Then to get electromagnetics in an elegant way without using forms, we
need an exterior algebra and an exterior derivative. Since we have a
basis (in a coordinate patch) e^i, then we can define p-vectors spanned
by elements of the form

e^i1 /\ e^i2 /\ ... /\ e^ip.

These are covariant p-vectors that have all the elegance and simplicity
of forms as long as we have a given metric.

The exterior derivative can then be defined to be a map from p-vectors
to (p+1)-vectors defined by

d(a e^i1 /\ ... /\ e^ip) = (da) /\ e^i1 /\ ... /\ e^ip

where a is a function and

da = da/dx^i e^i.

Then it is easy to show that d^2 = 0 and

d(a /\ b) = da /\ b + (-1)^p a /\ db.

Then, given a covariant vector A = A_i e^i, define

F
= dA
= (dA_j) /\ e^j
= dA_j/dx^i e^i /\ e^j
= 1/2 (dA_j/dx^i - dA_i/dx^j) e^i /\ e^j
= 1/2 F_{ij} e^i /\ e^j,

where

F_{ij} = dA_j/dx^i - dA_i/dx^j.

Then

dF = 0.
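
A symbolic check of dF = 0 with a made-up potential A_i (a
Python/sympy sketch; the identity becomes the cyclic sum of partial
derivatives):

    import sympy as sp

    x = sp.symbols('x0:4')                       # coordinates x^0,...,x^3
    A = [x[1]*x[2], sp.sin(x[0]), x[3]**2, x[0]*x[1]*x[2]]   # made-up A_i

    n = 4
    F = [[sp.diff(A[j], x[i]) - sp.diff(A[i], x[j])          # F_{ij}
          for j in range(n)] for i in range(n)]

    # dF = 0 reads d_i F_{jk} + d_j F_{ki} + d_k F_{ij} = 0 for all i,j,k:
    ok = all(sp.simplify(sp.diff(F[j][k], x[i]) + sp.diff(F[k][i], x[j])
                         + sp.diff(F[i][j], x[k])) == 0
             for i in range(n) for j in range(n) for k in range(n))
    print(ok)                                    # True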

Then you can define the Hodge star for covariant p-vectors. First you
need the "volume vector"

W = sqrt(|g|) e^1 /\ ... /\ e^n

It requires a bit of work, but I'm sure it could be done.

Then you can get

del F = j,

where del is the codifferential built from d and the Hodge star.

In another post I already described how to define integration of
covariant p-vectors, so everything should be hunky-dory. I think this
is just as elegant as using forms and is not really something to scoff
at as if it is something people did "before we knew better" :)

There are even certain advantages for doing it this way as opposed to
using forms, because tangent vectors are easy to visualize whereas forms
require a stretch, and I already stated my opinion about why you should
not "visualize" forms as contours being "pierced" by arrows (it's a
horrible nonlocal idea).

In fact I think that defining

da = da/dx^i e^i

as a covariant tangent vector is precisely just the gradient of a
function.

da
= da/dx^i e^i
= da/dx^i g^{ij} e_j
= g^{ij} da/dx^i e_j
= grad(a)
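
As a sanity check, the familiar polar-coordinate gradient drops right
out of this definition (a sympy sketch; the metric is the standard one
on the plane in polar coordinates):

    import sympy as sp

    r, t = sp.symbols('r theta', positive=True)
    g = sp.Matrix([[1, 0], [0, r**2]])   # plane metric, polar coordinates
    a = sp.Function('a')(r, t)

    da = sp.Matrix([sp.diff(a, r), sp.diff(a, t)])   # components da/dx^i
    grad = g.inv() * da        # g^{ij} da/dx^i: components of grad(a)
    sp.pprint(grad)            # (da/dr, (1/r^2) da/dtheta)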

I understand there are instances where you study a manifold that might
not have a metric (or symplectic structure), in which case this whole
argument is moot because you cannot define the reciprocal basis.
However, if you do have a metric or symplectic structure, I think there
might be certain advantages for not using forms.

Eric

fo...@my-deja.com

Oct 14, 2000
In article <39E50E5A...@uvi.edu>,
George Jones <gjo...@uvi.edu> wrote:

> In article, <39DE4FFE...@uiuc.edu>, fo...@my-deja.com wrote:

> > Let h: M -> N be a map from a manifold M to a manifold N. Then
> > tangent vectors X,Y on M get pushed forward to h_*X, h_*Y on N. If N
> > is equipped with a metric g_N, then we can use h to pull back g_N to
> > a metric g_M on M (can we? I think so.)

> g_N can be pulled back to M because at each element of N, g_N is
> an element of (cotangent space) (x) (cotangent space).

> > g_M = h^* g_N
> >
> > and
> >
> > g_M(X,Y) = g_N(h_*X,h_*Y).

> What if M already has a metric?

Hmm... Well, if g_N is a physical metric defining lengths and angles on
N, and g_M is the pullback of g_N to M, then if g_M' is a physical
metric on M defining lengths and angles on M, then I'd hope g_M = g_M',
but I'm not sure.

> > Then in my naive world where there are no 1-forms, all I have is
> > covariant and contravariant tangent vectors and a metric. It is the
> > contravariant tangent vectors that get pushed forward, but the
> > covariant tangent vectors must be pulled back.

> Vector fields on M can't necessarily be pushed forward to vector
> fields on N. 1-forms on N can always be pulled back to 1-forms on
> M.

Yeah, I think I understand this. If p and q are two points in M, then
h(p) may equal h(q), so nonintersecting curves in M might get mapped to
intersecting curves in N, so h could map two distinct tangent vectors in
M into the same tangent space in N. Hence, the pushforward need not be a
vector field, because it can assign two vectors to a single point.

> I guess you define the distinction between contravariant and
> covariant by transformation properties. So, in your world, two
> vectors in the same space can be considered equal mathematically,
> but not equal because they transform differently. All the indices
> and associations make things damn ugly. In a basis-free approach,
> transformation properties are derived!

This is interesting. If they are equal mathematically, then they should
transform the same way. Maybe understanding how they are actually equal
will lead to better understanding of the whole thing. I need to think
about it :)

> > The contravariant basis e_i = d/dx^i are gotten from curves and can
> > easily be draw as arrows. When you have a metric, the covariant
> > basis e^i = g^{ij} e_j are also tangent vectors and may also be
> > represented by arrows, and their meaning is pretty clear: e^i is an
> > arrow "perpendicular" to all the other e_j for j != i. I think this
> > is very nice and visualizable. I do not like the idea of picturing
> > forms as extended surfaces where the value of the form on the vector
> > is the number of surfaces pierced because this does injustice to the
> > necessary locality that the whole idea of a tangent space should be
> > trying to instill.

> Tangent vectors should be visualized as arrows, but this won't
> do for a mathematical definition because a manifold M may not
> be embedded in some higher dimensional space, and thus there is
> nowhere for the arrows to stick out into. A definition of tangent
> vector that is intrinsic to the manifold M is needed, and this
> usually involves mappings of function spaces to R. A definition
> involving equivalence classes of curves may also be used.

Well, I guess, due to Whitney, you CAN embed any manifold into a higher
dimensional one, can't you? :) I think I understand what you mean
though.

> The tensor product of finite-dimensional vector spaces V and W
> is often defined as the space of real-valued bilinear maps on
> V*xW*. Those darn dual spaces again! They can be avoided by
> defining a basis for V (x) W to be B_V x B_W where B_V and B_W
> are bases for V and W. Essentially the tensor product space is
> the free vector space on B_V x B_W. There is also a more
> technical construction that: 1) avoids dual spaces; 2) works
> for infinite-dimensional spaces as well.

Nice. This is probably the way I would go.

> I was curious about how you would define tensor products
> without dual spaces, but, unless I missed something (quite
> possible), you didn't respond to this question.

I think somewhere in there I defined wedges of the reciprocal basis e^i
of V and said that p-vectors were elements of the vector space spanned
by these elements. This is basically defining B_V /\ B_W to be a basis
of V /\ W.

> I have never liked the surface-piercing pictorial motivation
> for forms either, but I found that the algebraic approach
> gradually grew on me.

Yes, I liked the algebraic definition so much I jumped on it and
concluded, "This is beautiful! This is the way things should be!" But
then I ran into these reciprocal bases and it seems there is another way
to do it, so I am second guessing myself.

> > If you have any further reasons why forms are
> > REALLY necessary, I'd love to hear about it.

> I never said that your approach is impossible, I was merely
> trying to point out some of the myriad of things that need to be
> dealt with. I've known about using reciprocal vectors in
> Clifford algebra stuff for years.

Ok. This stuff is very new to me. I can't say that I am aware of
everything that goes on in Clifford algebras, but I have seen the
reciprocal bases defined there too. That was another source of my second
guessing.

> > Yes, but what I am interested in is waves and fields in
> > inhomogeneous media. Some people may scoff, but I think it is
> > possible to study the differential geometry of inhomogeneous (even
> > anisotropic) media. In fact, this is the only reason I am studying
> > it at all. I think there are a lot of subtle issues when you have
> > inhomogeneous, possibly even dispersive, media, that can be studied
> > using the tools of differential geometry.
> > The unification to *one* equation is only possible in free space, if
> > I remember correctly.

> In flat spacetime, use an inertial coordinate system to
> define the operator D = e^i d/dx^i. This definition is independent
> of which inertial system was used. Then
>
> D F = J
>
> is *the* Maxwell equation (including sources) in Cl(1,3), the
> Clifford algebra for spacetime. Something similar can be done
> in the smaller algebra Cl(3), which is isomorphic to the complex
> quaternions.

Again, this expression is ONLY valid in free space. Try putting in a
dielectric sphere. It doesn't work. You truly need two equations even in
Clifford algebra to account for dielectric inhomogeneity (I think).

> Clifford algebra people can be quite dogmatic in their views,
> and I wrongly assumed these considerations were motivating you.

Well, it might have something to do with it. I'm in the process of
learning the standard presentation and stumbled into the geometric
algebra stuff. Then I went out and grabbed just about every book I could
find on Clifford algebras. I particularly liked: "Clifford Algebras and
Spinors," Pertti Lounesto.

> And if you're really interested in differential geometry, I
> encourage you to learn the standard way as well. Who knows,
> someday you may want to communicate with somebody who only knows
> it the standard way!

Believe me, I'm trying :)

> I don't know what you're getting at when talking about using
> differential geometry for inhomogeneous media, but it sounds
> interesting.

Yeah, this is somewhat related to why you cannot write Maxwell's
equations as *one* equation in inhomogeneous media.

Thanks,
Eric

[Moderator's note: perhaps Eric and George are using "Maxwell's
equations in free space" to mean different things. I get the
feeling George is using this to mean "vacuum Maxwell equations",
i.e. no current or charge density. But I get the feeling Eric
is using it to mean Maxwell's equations *with* charge and current
density, but no inhomogeneous dielectric medium built in - i.e.,
only a trivial relationship between D and E fields. - jb]

George Jones

Oct 15, 2000
In article <8saukb$eta$1...@nnrp1.deja.com>, fo...@my-deja.com wrote:

> Well, I guess, due to Whitney, you CAN embed any manifold into a higher
> dimensional one, can't you? :) I think I understand what you mean
> though.

Yes, the Whitney embedding theorem shows that any n-dimensional
differentiable manifold can be embedded in some R^m, where
m <= 2n + 1, such that the differential structure of the
manifold is inherited from R^m. So, 9 dimensions will always
suffice for spacetime. However, if the manifold has a metric
that is inherited from a flat (pseudo-)metric for R^m, then
things are much worse. Chris Clarke has shown m <= 90 will
always work for any spacetime! So, up to 86 extra, non-physical
dimensions are required in order to have tangent vectors live
in some flat space.

> Well, it might have something to do with it. I'm in the process of
> learning the standard presentation and stumbled into the geometric
> algebra stuff. Then I went out and grabbed just about every book I could
> find on Clifford algebras. I particularly liked: "Clifford Algebras and
> Spinors," Pertti Lounesto.

Lounesto's book is quite interesting, and has a chapter on
electromagnetism. There are at least a couple of books devoted
to Clifford algebras applied to electromagnetism: "Multivectors
and Clifford Algebra in Electrodynamics," by Bernard Jancewicz,
and "Electrodynamics: A Modern Geometric Approach," by William
E. Baylis. Lounesto references the first, but the second was
published after Lounesto's book.

A problem when reading Clifford algebra works is that the
community doesn't have consistent sets of notation and
terminology. Different authors will use the same notation or
the same term to mean different things.

Again, I urge: Don't forget about standard presentations of
differential geometry and tensor analysis.

> [Moderator's note: perhaps Eric and George are using "Maxwell's
> equations in free space" to mean different things. I get the
> feeling George is using this to mean "vacuum Maxwell equations",
> i.e. no current or charge density. But I get the feeling Eric
> is using it to mean Maxwell's equations *with* charge and current
> density, but no inhomogeneous dielectric medium built in - i.e.,
> only a trivial relationship between D and E fields. - jb]

Yes, I took "free" to mean source-free, while Eric took "free" to mean
something like "the charges are not bound".

I may (or may not) make a comment or two in a later post on the
integration of differential forms.

Regards,
George

Paul Arendt

Oct 16, 2000
In article <8s5d8h$2g20$1...@newshost.nmt.edu>,
Paul Arendt <par...@nmt.edu> spewed forth the following lie:
>... O(2n), the group of linear
>transformations preserving a given metric, is a *proper*
>subgroup of Sp(2n), the group of linear transformations
>preserving a given symplectic form.

Blah! O(2n) is NOT a subgroup of Sp(2n) -- there are metric-
preserving linear maps that are not symplectic-form preserving.

> There are symplectic-
>preserving transformations, like the "shear" one given above,
>that do not preserve the metric.)

However, this (which is the point I wanted to make) is still true.
So the less false statement would be: O(2n) intersect Sp(2n) is
generally a proper subgroup of both O(2n) and Sp(2n).


Toby Bartels

Oct 16, 2000
George Jones wrote in part:

>Vector fields on M can't necessarily be pushed forward to vector
>fields on N. 1-forms on N can always be pulled back to 1-forms on
>M.

Right.
Individual vectors can be pushed forward,
but entire vector fields cannot.

>I guess you define the distinction between contravariant and
>covariant by transformation properties.

This is why vectors are really *covariant* (going in the same direction),
while forms are really *contravariant* (going in the opposite direction).
The *basis* of a tangent space is contravariant,
and the *basis* of a cotangent space is covariant,
so the old terminology called vectors "contravariant"
and covectors and forms "covariant", but that's dying out.


-- Toby
to...@math.ucr.edu


Toby Bartels

Oct 16, 2000
Paul Arendt wrote:
>Eric Forgy wrote:
>>Paul Arendt wrote:
>>>Eric Forgy wrote:

>>>>w(e^i,e_j) = delta^i_j

>>> ** TILT! ** Poor little w doesn't know how to do this yet. It
>>>can only eat things with lower indices, until you teach it to do
>>>otherwise.

>>Maybe I didn't make it clear enough but e^i IS a tangent vector. I know the
>>superscripts makes it look a lot like a 1-form, but that is the whole point of
>>me asking my questions in the first place. This book I have "Vector and Tensor
>>Analysis with Applications", Borisenko and Tarapov [BT] spend an entire book
>>talking about covariant, contravariant, and mixed tensors but they do not once
>>introduce a 1-form.

>Well, I'm not sure what you mean by this! If you've got a (0,1) type
>tensor, you've got a one-form. It's like saying that there are no
>elephants at the zoo; only animals.

I think part of your confusion, Paul,
is that Eric doesn't necessarily mean
what you think he means by a symbol like "e^i" or "e_i".
Way back in his first post,
he defined e^i, using a metric and a basis {e_i},
in such a way that {e^i} is another basis for the vector space
and each e^i is a vector, not a covector.
Thus, w knows how to eat e^i,
because e^i is a vector, despite looking like a covector.

Here's a review:
Given: a vector space V, a metric g, and a basis {e_i}.
Let {d^i} be the basis of V* dual to {e_i}, and let e^i be the vector
corresponding to d^i under g (i.e. e^i = g^{ij} e_j).
Now {e^i} is a basis of V, however much it may *look* like a basis of V*.

Then Eric's new thing was to do this with w in place of g.
Not *deriving* a metric g from w -- no metric enters into it then --
but *replacing* g with the symplectic form w.


-- Toby
to...@math.ucr.edu


John Baez

Oct 16, 2000
In article <8rtv6q$4gb$1...@nnrp1.deja.com>, <fo...@my-deja.com> wrote:

>I guess another honest motivation is that I can "see"
>what a tangent vector is, but I cannot see what a form is and the
>typical pictorial attempt given in e.g. MTW leaves something to be
>desired in my opinion.

Their pictures are actually pretty good. Greg Egan has described
some other methods for picturing differential forms that are also good.

If you try to picture the gradient of a function without bringing
a *metric* into the story, you'll be forced to invent some way of
picturing 1-forms. I urge you to think about this.

You may not have noticed, but the old-fashioned picture of the
gradient as a "vector pointing in the direction that the function
increases most rapidly" uses a metric! After all, you need to know
how fast the function increases *per step of unit length* in a given
direction before you can draw this picture. The "unit length" concept
requires a metric! Changing the metric will change the direction of
the arrow you draw.

For this reason, when we're trying to rid ourselves of metric-dependence
we need some other way of defining (and picturing) the gradient of a
function. The answer is to invent 1-forms and use the differential df
of a function f. We can visualize a 1-form by imagining a tiny packet
of parallel surfaces at a point: for example, the level surfaces of a
function very near that point.
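
A quick numerical illustration of the metric-dependence (my own toy
example): the same differential df, raised with two different metrics,
gives gradient arrows pointing in different directions.

    import numpy as np

    df = np.array([1.0, 0.0])          # components of the 1-form df

    g1 = np.eye(2)                     # the Euclidean metric
    g2 = np.array([[2.0, 1.0],
                   [1.0, 1.0]])        # some other (made-up) metric

    grad1 = np.linalg.solve(g1, df)    # grad f has components g^{ij} (df)_j
    grad2 = np.linalg.solve(g2, df)

    print(grad1)                       # [1. 0.]
    print(grad2)                       # [ 1. -1.]  -- same df, new direction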

>The contravariant basis e_i = d/dx^i are gotten from curves and can
>easily be drawn as arrows. When you have a metric, the covariant basis
>e^i = g^{ij} e_j are also tangent vectors and may also be represented by
>arrows, and their meaning is pretty clear: e^i is an arrow
>"perpendicular" to all the other e_j for j != i. I think this is very
>nice and visualizable. I do not like the idea of picturing forms as
>extended surfaces where the value of the form on the vector is the
>number of surfaces pierced because this does injustice to the necessary
>locality that the whole idea of a tangent space should be trying to
>instill.

No more than the idea of picturing a tangent vector as an extended
arrow! This too is an injustice.

To get around this, one must picture tangent vectors as "infinitesimal"
arrows, and differential forms as "infinitesimal" packets of surfaces.

Of course one visualizes something "infinitesimal" by pretending it's
teeny-weeny. Visualization typically involves a process of creative
cheating. This is not bad.

>I really like thinking about this stuff and the more I do, the
>more I am becoming convinced that forms are really not necessary as long
>as you have a metric (preferrably a physical one defining lengths and
>angles) lying around.

Since I'm doing quantum gravity, I never have a metric lying around -
at least, never just *one*. So for me, differential forms are crucial.
Your mileage may vary.


Charles Francis

Oct 16, 2000

In article <39DD38CD...@uiuc.edu>, thus spake Eric Forgy
<fo...@uiuc.edu>

>John Baez wrote:

>> In article <39DBE9AB...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu> wrote:

>> >I've never heard of a variety of metrics. Is there such thing as a single
>> >manifold with two distinct metrics on it? What would that mean?

>> Remember, general relativity is not just the study of a *single* metric;
>> it's the study of *all* metrics satisfying Einstein's equations. Picking
>> one and setting it up as the "king" who decides how to identify covariant
>> vectors with contravariant vectors would be a thoroughly unnatural
>> business - just like picking one coordinate system and saying everybody
>> has to always work with this one.

Not so. Choosing a particular co-ordinate system is exactly what we do.
Doing anything else is thoroughly unnatural, like insisting that
everyone switch between imperial and metric units on every line
of a calculation just because you can.

>Take for example two coalescing black holes. There is just one solution/metric
>that describes this, right? It wouldn't make sense to talk about two distinct
>solutions/metrics for this binary system would it? The solution should be
>unique.

>General relativity is not about studying the possible solutions (plural) to a
>specific system (I hope).

Given one metric you can immediately introduce whole families of
metrics. For example you can trivially multiply the distance between any
two points by a constant.

General relativity is just what it says, extremely general. Its
formalism takes into account that it really does not matter which co-
ordinate system or metric you choose; the manifold is the same. As a
result it is extremely abstract and difficult to understand and you
often find that relativists seem to completely forget that we actually
do work in particular co-ordinate systems with a particular metric - and
that these are not necessarily the metrics or co-ordinate systems which
they give as solutions to the equations.

You only need one solution in one co-ordinate system, but general
relativity recognises that this one solution is representative of a
whole family of solutions which are trivially equivalent. Unfortunately
the mathematical language is so convoluted and difficult that I have
known several relativists who seem to think that it is the relationships
described by these trivial equivalences which are the subject of
relativity, like schoolboys studying transformations, isometries, shears
and stretches etc, whereas in fact it is the underlying structure of the
manifold which is both unique and interesting.

--
Regards

Charles Francis
cha...@clef.demon.co.uk

Toby Bartels

Oct 17, 2000

Paul Arendt wrote:

>However, this (which is the point I wanted to make) is still true.
>So the less false statement would be: O(2n) intersect Sp(2n) is
>generally a proper subgroup of both O(2n) and Sp(2n).

And, BTW, O(2n) intersect Sp(2n) is a very interesting group!
Anybody know what it is ... ?


-- Toby


Maxime Bagnoud

Oct 17, 2000

fo...@my-deja.com wrote:

> Given a "particular" physical metric g on a manifold, which in my
> opinion is just like fixing a particular structure for my manifold, then
> we can define tangent vectors in the usual way. In a coordinate patch,
> we can define a basis
>
> e_i = d/dx^i.
>
> At each point in the coordinate patch, we can also define
>
> e^i = g^{ij} e_j.
>
> Then to get electromagnetics in an elegant way without using forms, we
> need an exterior algebra and an exterior derivative.

[unnecessary further quoted text deleted by angry gods]

What you are doing here is rewriting the Maxwell theory of EM in terms of
p-forms.

You might call them totally antisymmetric covariant tensors of rank p if it
sounds better to you (it doesn't sound better to me and calling them
covariant p-vectors is certainly misleading), but these are called p-forms
everywhere in the literature.

> There are even certain advantages for doing it this way as opposed to
> using forms because tangent vectors are easy to visualize whereas forms
> require a stretch and I already stated my opinion about why you should
> not "visualize" forms as contours being "pierced" by arrows (it's a
> horrible nonlocal idea).

I didn't follow the "pierced contours" discussion, so I won't talk about
that, but a good way to visualize 1-forms (at least in R^3) is via the
meaning they have in crystallography, as normal vectors of the planes in
your crystal.

> In fact I think that defining
>
> da = da/dx^i e^i
>
> as a covariant tangent vector is precisely just the gradient of a
> function.
>
> da
> = da/dx^i e^i
> = da/dx^i g^{ij} e_j
> = g^{ij} da/dx^i e_j
> = grad(a)

That's of course true, but what you call a covariant tangent vector IS a
1-form, not anything different as you're trying to argue.

> I understand there are instances where you study a manifold that might
> not have a metric (or symplectic structure), in which case this whole
> argument is moot because you cannot define the reciprocal basis.
> However, if you do have a metric or symplectic structure, I think there
> might be certain advantages for not using forms.

I rather think that there might be certain advantages (of clarity for the
audience) for using the standard terminology which calls a totally
anti-symmetric covariant tensor of rank p a p-form, which is short and
clear.

In addition, if you happen to study some supergravity in 10D in your life,
you will soon figure out that the form notation A_(6) is much more
efficient to use than the coordinate-dependent (index) notation A_{ijklmn}
when using forms of high degrees like a 6-form potential.
For example, if you have a dA_{1} /\ dA_{3} /\ dA_{3} term in the action, it's
quite cumbersome to rewrite it as an anti-symmetrized sum of products of
tensors indexed in a particular basis.

Regards,

Maxime

fo...@my-deja.com

Oct 17, 2000

> [Moderator's note: perhaps Eric and George are using "Maxwell's
> equations in free space" to mean different things. I get the
> feeling George is using this to mean "vacuum Maxwell equations",
> i.e. no current or charge density. But I get the feeling Eric
> is using it to mean Maxwell's equations *with* charge and current
> density, but no inhomogeneous dielectric medium built in - i.e.,
> only a trivial relationship between D and E fields. - jb]

It is my understanding that George is talking about Maxwell's equations
WITH current and charge, but otherwise in free space. In this case, you
can choose suitable units and define

D = *_s E

and

B = *_s H,

where *_s is the 3D spatial Hodge star, D,B are 2-forms and E,H are
1-forms. In this case, you can combine the Clifford algebra
equivalents of

dF = 0

and

delF = j

into *one* single equation that looks like

DelF = J

where

Del F = Del . F + Del /\ F.
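
For concreteness, here is what the spatial star *_s does to the 1-form
E in components, in the simplest case of a flat metric in Cartesian
coordinates (a toy numpy sketch of mine; with a general spatial metric
a factor sqrt(det g) and an index raising with g^{ij} enter as well):

    import numpy as np
    from itertools import permutations

    def perm_sign(p):
        # Sign of a permutation, by counting inversions.
        p, s = list(p), 1
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                if p[i] > p[j]:
                    s = -s
        return s

    eps = np.zeros((3, 3, 3))          # Levi-Civita symbol eps_{ijk}
    for p in permutations(range(3)):
        eps[p] = perm_sign(p)

    E = np.array([1.0, 2.0, 3.0])      # components E_i (flat metric: E^i = E_i)
    D = np.einsum('ijk,i->jk', eps, E) # the 2-form D = *_s E

    assert np.allclose(D, -D.T)        # D is antisymmetric
    assert D[1, 2] == E[0]             # D_{23} = E^1, and cyclically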

When I said "inhomogeneous media", I meant

D = *_e E

and

B = *_u H,

where *_e is a 3D "electric" Hodge that incorporates dielectric
inhomogeneity and possibly anisotropy encoded with the usual metric
information. Likewise, *_u is a 3D "magnetic" Hodge that incorporates
magnetic inhomogeneity and possibly anisotropy along with the usual
metric information. You can (I think) combine these into a single
expression

G = *_m F

where *_m is a 4D "electromagnetic" Hodge that incorporates
inhomogeneities in permittivity and permeability as well as bianisotropy
(and even dispersive media!). Thus, all of these effects can be
incorporated into a metric. (Btw, is there such a thing as a
convolutive (or dispersive) metric??? i.e. a metric with a "memory",
that is, a metric where the measurement of length would depend on the
fields previously in the region. Like an impulse response.)
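
Here is a rough numpy sketch of such a 4D Hodge map G = * F, written
for the flat Minkowski metric (my own toy example; Eric's *_m would put
constitutive data where the pure metric data sits here):

    import numpy as np
    from itertools import permutations

    def perm_sign(p):
        # Sign of a permutation, by counting inversions.
        p, s = list(p), 1
        for i in range(len(p)):
            for j in range(i + 1, len(p)):
                if p[i] > p[j]:
                    s = -s
        return s

    eps = np.zeros((4, 4, 4, 4))       # Levi-Civita symbol eps_{mnrs}
    for p in permutations(range(4)):
        eps[p] = perm_sign(p)

    g = np.diag([-1.0, 1.0, 1.0, 1.0]) # flat metric; any g could go here
    g_inv = np.linalg.inv(g)
    vol = np.sqrt(abs(np.linalg.det(g)))

    F = np.zeros((4, 4))               # a sample field strength F_{mn}
    F[0, 1], F[1, 0] = 1.0, -1.0       # say, an E-field along x

    F_up = g_inv @ F @ g_inv           # F^{rs} = g^{ra} g^{sb} F_{ab}
    G = 0.5 * vol * np.einsum('mnrs,rs->mn', eps, F_up)

    assert np.allclose(G, -G.T)        # the star of a 2-form is a 2-form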

If you take your favorite solution to Einstein's equations and you want
to study classical EM in that spacetime, then the curvature can be
mapped directly to the constitutive relations. This will usually
manifest itself as an equivalent bianisotropic medium (try it! :))

The type of "media" you get by this is typically child's play compared
to what we usually study in CEM (comp EM), because there is no
reflection! Gravity only refracts, it does not reflect. In engineering
lingo, I'd say the equivalent "media" that results from gravity is
impedance matched :)

It might sound silly (but that has never stopped me before), but in
this sense classical EM is more difficult than general relativity :)
*duck*

The reason I say this is as follows (besides the fact that I never
really studied GR so I'm not qualified :)): given that you can take a
background spacetime metric (at least any sensible one) and convert it
into an equivalent inhomogeneous bianisotropic medium, what happens if
you do the reverse? Take a highly inhomogeneous bianisotropic medium and
translate all that material property into an equivalent metric!! What
do you get? You get TWO metrics! One electric metric and one magnetic
metric, if you keep all the possible degrees of freedom allowed by
general anisotropic media. This is actually why I asked several
questions about having multiple metrics on a single manifold. I had
this in mind. Hence, in a possibly extremely convoluted fashion, you
can study the geometry of inhomogeneous media as a manifold carrying
two distinct metrics. The two metrics lead to reflections, whereas
your standard simple gravitational curvature based on a single metric
can only lead to refraction. Worse yet, if your material is dispersive,
then this gets mapped to a "dispersive metric" and I have NEVER heard
of that :) A metric with a memory would be something, wouldn't it? :)

Stephen Speicher

Oct 17, 2000

U(n) ?

Stephen
s...@compbio.caltech.edu

You can always tell a pioneer by the arrows in his back.

Printed using 100% recycled electrons.
--------------------------------------------------------


John Baez

Oct 18, 2000

In article <8s7v25$9gb$1...@newshost.nmt.edu>,
Paul Arendt <par...@nmt.edu> wrote:

>So the less false statement would be: O(2n) intersect Sp(2n) is
>generally a proper subgroup of both O(2n) and Sp(2n).

And just to re-emphasize what's so cool about this: the group
O(2n) intersect Sp(2n) is famous! It's none other than U(n),
the n x n unitary matrices.

For more details check out

http://math.ucr.edu/home/baez/symplectic.html


Charles Francis

Oct 18, 2000

In article <8sa579$veg$1...@Urvile.MSUS.EDU>, thus spake John Baez
<ba...@galaxy.ucr.edu>

>Since I'm doing quantum gravity, I never have a metric lying around -
>at least, never just *one*. So for me, differential forms are crucial.

I have a slightly different take on this. Starting with a position space
as a basis, I define a vector space. I then introduce a metric
arbitrarily - like John says, not just one metric; at this stage it could
be any metric. It is only an arbitrary mathematical definition, not a
physical thing. Momentum space is introduced in terms of that metric,
and I don't really need to think about differential forms. I think this
is quite important from a human angle, because we all have finite brain
power, and the more simplistic the manner in which you think about
things, the more chance you have. At a later point in the development of
the theory I try to tie down the metric by looking at physical
measurement of distance, and I reckon it has to satisfy Einstein's field
equation on the basis of a detailed study of the physical processes in
measurement, but John doesn't believe me.

fo...@my-deja.com

Oct 18, 2000

In article <39EC76DD...@unine.ch>,
Maxime Bagnoud <Maxime....@unine.ch> wrote:

> fo...@my-deja.com wrote:

> > Given a "particular" physical metric g on a manifold, which in my
> > opinion is just like fixing a particular structure for my manifold,
> > then we can define tangent vectors in the usual way. In a
> > coordinate patch, we can define a basis
> >
> > e_i = d/dx^i.
> >
> > At each point in the coordinate patch, we can also define
> >
> > e^i = g^{ij} e_j.
> >
> > Then to get electromagnetics in an elegant way without using forms,
> > we need an exterior algebra and an exterior derivative.
>

> [unnecessary further quoted text deleted by angry gods]
>
> What you are doing here is rewriting the Maxwell theory of EM in terms of
> p-forms.
>
> You might call them totally antisymmetric covariant tensors of rank p
> if it sounds better to you (it doesn't sound better to me and calling
> them covariant p-vectors is certainly misleading), but these are
> called p-forms everywhere in the literature.

Thanks for your comments. I can't expect everyone to read every single
post, but this has been the theme of the entire thread (and in fact the
title of the subject heading). I tried to be very careful to make it
clear that e^i is understood to be a reciprocal basis of the tangent
space and not a 1-form in the dual space (I know that Toby has gotten
this, because he subsequently explained it to someone else).

e^i is exactly how I defined it.

e^i = g^{ij} e_j.

This is a linear combination of tangent vectors and is hence a tangent
vector. It is NOT an element of the dual space as a form is. A 1-form
is a linear functional, i.e. an element of the dual space. e^i is an
element of the SAME space (see any one of my previous posts to see that
e^i is in fact a tangent vector that is well defined as long as you
have a metric or symplectic structure). I think the use of the term
covariant tangent vector is justified. In fact, I can define
contravariant 1-forms in precisely the same way using reciprocal
1-forms (I wish I had read "Gauge fields, knots, and gravity" BEFORE
starting this so I could have gotten the contra and co the more
correct way around, but I will stick to the old-fashioned "incorrect"
usage of contra and co :)).

dx_i = g_{ij} dx^j

I know this is ugly, but I CAN do it and I would call 1-forms expressed
in these reciprocal bases contravariant 1-forms. Then,

g(dx_i,dx^k)
= g_{ij} g(dx^j,dx^k)
= g_{ij} g^{jk}
= delta_i^k

> > In fact I think that defining
> >
> > da = da/dx^i e^i
> >
> > as a covariant tangent vector is precisely just the gradient of a
> > function.
> >
> > da
> > = da/dx^i e^i
> > = da/dx^i g^{ij} e_j
> > = g^{ij} da/dx^i e_j
> > = grad(a)
>

> That's of course true, but what you call a covariant tangent vector
> IS a 1-form, not anything different as you're trying to argue.

No, it isn't a 1-form. That is the whole point. Perhaps I shouldn't have
used the same symbol "d", but I was trying to make the connection to
forms transparent. If I switch notation and refer to it as D, then

Da = da/dx^i e^i

is a tangent vector, not a 1-form. Given a metric, there IS a 1-form
to which this corresponds:

Da^* = g(Da,.) = da.

THAT is a 1-form. So I expressed Maxwell's equations NOT with p-forms,
but with p-vectors. Not only p-vectors but covariant p-vectors. I think
it is kind of neat :)

> I rather think that there might be certain advantages (of clarity for
> the audience) for using the standard terminology which calls a totally
> anti-symmetric covariant tensor of rank p a p-form, which is short and
> clear.

I hope you understand why I do not call them p-forms now. They are p-
vectors! :) More than that, they are covariant p-vectors expressed in
the reciprocal basis.

> In addition, if you happen to study some supergravity in 10D in your
> life, you will soon figure out that the form notation A_(6) is much
> more efficient to use than the coordinate-dependent (index) notation
> A_{ijklmn} when using forms of high degrees like a 6-form potential.
> For example, if you have a dA_{1} /\ dA_{3} /\ dA_{3} term in the action,
> it's quite cumbersome to rewrite it as an anti-symmetrized sum of
> products of tensors indexed in a particular basis.

That certainly makes sense :)

Thank you very much. Perhaps I am becoming redundant, but explaining
this so many times has really solidified the idea in my head. Now I
KNOW that p-vectors are just as elegant as p-forms and can do
everything a form can do provided that you define an exterior algebra
and exterior calculus on your tangent space, and of course that you
have either a metric or symplectic structure (or both) lying around.

I will probably NEVER use this though and I will happily go back to
using forms, but by the confusion this has seemed to cause (maybe the
confusion is because of my bad explaining), I think that it is
something to be aware of. If I were ambitious, I am SURE that I can
rewrite your supergravity Lagrangian without using a form at all. Just
having p-vectors and a metric is all you need.

Thank you very much,
Eric

PS: I know that the tangent space V and cotangent space V^* are
isomorphic if you have a metric or symplectic structure, so it seems to
me that there is a tendency NOT to distinguish the two. All of these
questions are due to me trying to keep the two very distinct, but
showing that anything you can do in one, you can do in the other just
as well using reciprocal bases.

Toby Bartels

Oct 24, 2000

Stephen Speicher wrote:

>Toby Bartels wrote:

>>And, BTW, O(2n) intersect Sp(2n) is a very interesting group!
>>Anybody know what it is ... ?

> U(n) ?

Exactly.

Recall the recurring theme on this newsgroup
that, if you have any 2 of the following, you have the other
(assuming the first 2 were compatible in a certain sense):
A) real inner product
B) real symplectic structure
C) complex structure
and that all 3 together give you a complex inner product (D).
Then (A) allows you to define O(2n),
(B) allows you to define Sp(2n),
and (D) allows you to define U(n).
That is the context where U(n) = O(2n) intersect Sp(2n) is true.
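
A numerical spot check of one direction of this (my own sketch):
realify a random unitary matrix and verify that it preserves both the
real inner product and the symplectic form.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3

    # A random unitary U, via QR of a complex Gaussian matrix:
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    U, _ = np.linalg.qr(Z)

    # Realify: z = x + iy <-> (x, y), so U = X + iY <-> R below.
    X, Y = U.real, U.imag
    R = np.block([[X, -Y],
                  [Y, X]])

    J = np.block([[np.zeros((n, n)), -np.eye(n)],
                  [np.eye(n), np.zeros((n, n))]])  # multiplication by i

    assert np.allclose(R.T @ R, np.eye(2 * n))     # R is in O(2n)
    assert np.allclose(R.T @ J @ R, J)             # R is in Sp(2n)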


-- Toby
to...@math.ucr.edu


Paul Arendt

Oct 24, 2000

In article <8sa804$uh3$1...@nnrp1.deja.com>, <fo...@my-deja.com> wrote:
> par...@nmt.edu (Paul Arendt) wrote:
>
>> In article <39DE4FFE...@uiuc.edu>, Eric Forgy <fo...@uiuc.edu>
>wrote:
>
>>>X = X^i e_i <- a tangent vector which they call a contravariant
>>> vector
>>>
>>>X = X_i e^i <- a tangent vector which they call a covariant vector
>
>> ***** WITHOUT A METRIC, YOU'VE NO WAY OF LEGITIMATELY *****
>> ***** GIVING BOTH OF THESE OBJECTS THE COMMON NAME, X. *****
>
>Exactly! They ARE both tangent vector because in Borisenko and Tarapov
>they HAVE a metric...

Okay -- I thought we were still talking about having a symplectic
form and NO metric, though.

>and e_i IS a tangent vector because it is defined as
>
>e_i = d/dx^i
>
>as usual, and e^i is ALSO a tangent vector because it is defined by
>
>e^i = g^{ij} e_j

All right -- it's a bit odd, but perfectly valid if you've got a
metric! But then I urge you to write the relation between the
two bases as
e^i . e_j = delta^i_j
with the "dot product" between them, since that's what you're
doing:
e^i . e_j = g^{ik} e_k . e_j = g^{ik} g_{kj} = delta^i_j.

Otherwise, it looks like you're evaluating a form against a vector
to some people. Well, to me at least. :-) And in fact, the
canonical one-form basis (which I'll spell out with dx^i since
e^i is already taken) satisfies the same relationship, without
the dot! In other words, "e^i dot", which is g(e^i,.), is
exactly the ONE-FORM dx^i. So the forms are hiding in your
formalism whether you like it or not. Here, I'll give you
an index-laden formula to prove it!

g(e^i,.) = g_{jk} dx^j(g^{il} e_l) dx^k
= g_{jk} delta^j_l g^{il} dx^k
= g_{jk} g^{ij} dx^k
= delta^i_k dx^k
= dx^i .

And in fact, you get these {dx^i} beasts even when you DON'T
have a metric lying around. But I suspect that you knew that
already. You derived nearly this same relation in some of
the rest of your post.

(Later, Eric wrote that: )
>... tangent vectors are easy to visualize whereas forms
>require a stretch and I already stated my opinion about why you should
>not "visualize" forms as contours being "pierced" by arrows (it's a
>horrible nonlocal idea).

The nice thing about the dx^i versus the d/dx^i is that the
former are "surfaces of constant x^i" while the latter are
"directions of increasing x^i, holding all x^j=0 for j<>i."
So if you change the coordinate y, for instance, you'll
generally change the arrow d/dx too, but the form dx
will stay the same. And as John Baez said: visualization
of forms as nonlocal things is no worse than the situation
with vectors.

(*Voice of James Earl Jones reverberating from inside a helmet:*
"Eric! If only you knew the power of the dark side! Open the
big black book again. Go on, look at the pretty pictures!")

>Given a tangent vector X = X^i e_i and a metric g, you can define the
>1-form
>
>X^* = g(X,.) = X_i dx^i,
>
>where
>
>X_i = g_{ij} X^j. So by looking at this expression alone you can not
>really conclude whether or not X_i is a component of a tangent vector
>written using the reciprocal basis or if it is a component of a 1-form

>...

I disagree here, but mildly. When you define something like X_i, you
are compelled (by order of the holy math doctrine bylaws) to say at
the outset what your basis is! If the basis elements are forms, you've
got a form; if vectors, you've got a vector.

>X_j = ... The SAME component as the 1-form.

That's true, but one of these beasts (together with its basis)
needs a "dot product" to evaluate it against a vector and get
a number out, while the other doesn't.

>...everything I said and everything I
>claimed was assuming we DID have a metric.

Whoops! My bad. Hmmm... but then you went on to say

>e^i is DEFINED as e^i = g^{ij} e_j or (e^i = w^{ij} e_j) and everything
>I said assumed we had a nondegenerate g or w :)

and it's the latter case that I was worried about. But I was
just confused by your notation; thought you were defining
e^i as a form somehow, in both the metric and symplectic cases.
And so the w(e^i,e_j) = delta^i_j confused me doubly: looked like
you were feeding w a form and a vector, firstly. Secondly, they
were fed in in an antisymmetric way, and you were getting a
symmetric (in i and j) result out! But I get it now: e^i is
NOT w(e_i,.) !

(feeding a vector to a bivector and getting a vector back: )


>> >w* a bivector
>> >x a vector

>> >= (1/2 w^{ij} e_i /\ e_j) . (x^k e_k)
>> >= 1/2 w^{ij} x^k (e_i /\ e_j).e_k
>> >= 1/2 w^{ij} x^k [e_i w(e_j,e_k) - e_j w(e_i,e_k)]
>> >= 1/2 w^{ij} x^k (w_{jk}e_i - w_{ik}e_j)
>> >= 1/2 (delta^i_k x^k e_i + delta^j_k x^k e_j)
>> >= 1/2 (x^i e_i + x^j e_j)
>> >= x
>> >
>> >Whoa! I have no idea what just happened :)

Okay - the bivector w* is basically the representation of
w^(-1), the inverse operation to w. And your "dot
product" that gets rid of two vector indices is just
w, so you're basically applying w and w^(-1) to x
simultaneously, and getting x back.
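
Numerically, the whole computation collapses to two contractions (a
small sketch of mine; w_{ij} below is a random nondegenerate
antisymmetric matrix, and w^{ij} is its inverse in Eric's sense
w^{ij} w_{jk} = delta^i_k):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.normal(size=(4, 4))
    w_dn = A - A.T                     # w_{ij}: antisymmetric, nondegenerate
    w_up = np.linalg.inv(w_dn)         # w^{ij} w_{jk} = delta^i_k

    x = rng.normal(size=4)

    # (1/2) w^{ij} x^k [ e_i w(e_j,e_k) - e_j w(e_i,e_k) ]:
    term1 = np.einsum('ij,jk,k->i', w_up, w_dn, x)   # coefficient of e_i
    term2 = np.einsum('ij,ik,k->j', w_up, w_dn, x)   # coefficient of e_j
    assert np.allclose(0.5 * (term1 - term2), x)     # ... and x comes back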

But before, I wrote:
>> I think the magic was in the step assuming that
>> w^{ij} w_{jk} = delta^i_k .

Sorry; I misunderstood your definitions here! You're
*defining* this, not assuming it after defining w^{ij}
some other way. (And so Eric went through some lengthy
writing to convince me nothing was wrong. Sorry, Eric!)

Actually, the construction is pretty neat, even if the notation
set off alarms for me. Your e^i are vectors pointing in the
unique direction that has no subspace orthogonal to the one-form
w(e^i,.). In natural coordinates (which exist on any symplectic
manifold, by Darboux' theorem), each e^i will be *equal*, up to a
sign, to some e_j, where j is definitely NOT equal to i!

However, I implore you to agree with me that writing
e^i = e_j, for some j<>i
looks weird, though! :-)

>> >> Another trick if you've got a metric already: define the sum
>> >> b = g + i w.
>> >> Since g is symmetric, and w is antisymmetric, what can you
>> >> say about b?
>

>> ...What type of matrix has symmetric
>> real part, and antisymmetric imaginary part?
>

>You mean Hermitian?

Right. And a Hermitian matrix always has real eigenvalues, so
we can diagonalize it by a change of basis and there will be
all real numbers on the diagonal. And so now, here's a tougher
one: what are the eigenvalues of the matrix 1/2 b?

To make it easier, suppose that we've been able to find a basis
in which b looks like

1 i 0 0 0 0 ...
-i 1 0 0 0 0 ...
0 0 1 i 0 0 ...
0 0 -i 1 0 0 ...
0 0 0 0 1 i ...
0 0 0 0 -i 1 ...
...
up to a 2n by 2n matrix. (We can actually always make b look
like diagonal 2 by 2 blocks that each look like
a i
-i a
for some real number a > 0 that depends upon which 2 by 2 block
we're looking at. If we're lucky, all the a's will be equal,
and something simple like 1.) In the usual case in mechanics
(the cotangent bundle of a configuration space), these coordinates
would be (x, px, y, py, z, pz, ...), where px is the conjugate
momentum to coordinate x, etc.
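
For anyone who wants the puzzle spoiled, here is a one-line numerical
check of a single a = 1 block (my own):

    import numpy as np

    block = 0.5 * np.array([[1.0, 1.0j],
                            [-1.0j, 1.0]])  # one 2x2 block of (1/2) b

    print(np.linalg.eigvalsh(block))        # Hermitian => real; prints ~[0. 1.]

Note the zero eigenvalue: b is degenerate, which matters below.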

>I tried to read a bit of Nakahara (not much) and it
>sounds like you might be getting at a Kahler manifold or something with
>a complex metric,

If you demand that a metric be non-degenerate, then b is not a
metric. You will find this out if you perform the above exercise.
What happens has a lot to do with the little factoid about
Sp(2n) intersect O(2n) that was so neat, *two* folks from
Riverside had to say something about it. Unfortunately, one
of them posed a puzzle while the other simultaneously gave away
the answer...

>Thanks again for making me think :)

Likewise!

>I am fairly convinced now that if you have a (preferably physical)
>metric g or symplectic structure w (or both) that you do not need to use
>forms at all.

True, but that's just a simple fact about any set of thingies that
has an isomorphism to another set of thingies. You can always use
the isomorphism to jump back and forth between them, so that
computations really only have to be performed on one of the sets.


Paul Arendt

Oct 24, 2000

In article <8sapik$bff$1...@nnrp1.deja.com>, <squ...@my-deja.com> wrote:
>I tried to follow your discussion, and to you seem to be arguing over
>whether it's possible to define a correspondence between vectors and
>covectors on a symplectic manifold - i.e. using a non-degenerate anti-
>symmetric form rather than a metric.

Actually, I agreed with Eric Forgy back in the first exchange of
posts that this is indeed possible! What I was trying to contest
were just some of the formulas he wrote, namely
e^i e_j = delta^i_j
and also

w^{ij} w_{jk} = delta^i_k

... given the way I thought he'd defined the symbols in these things.
In particular, the first formula looks suspiciously like he'd found
a metric based upon the symplectic form alone (he used w to define
e^i), and my point is that while that's possible, it will depend
upon the coordinate system chosen.

(Before anybody points out that there's nothing wrong with the
above formulae: that's true if the {e^i} are the natural dual basis
to the {e_i}, but that's not what the way Eric wanted to define the
{e^i} -- I think! And if he did, then the relationship between the
{e^i} and {e_i} depends on the coordinate system, and so isn't
a "tensor" relationship like the metric provides.)

>Given a vector v, the covector v' is defined by v'(u) = w(v, u). Now,
>it may be verified this function is one-to-one as w is non-degenerate,
>so this provides an isomorphism between the tangent and cotangent
>spaces. There is only one exotic phenomenon here - when you construct w
>on the cotangent space using this isomorphism, and try to construct a
>new isomorphism, based on this new w, you get not the inverse of the
>former function, but the minus of this inverse (!). We discussed this
>with John Baez recently.

I must have missed that discussion. I'll try to reproduce it when I
reply to Eric's post, since w^{ij} is going to be the version of w
on the cotangent space you're describing above. (But you always
get the operation v -> -v on any vector space for free, so I'm
wondering how exotic the phenomenon can really be.)
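
With one definite set of conventions -- lower with v' = w(v,.), then
push w forward to the cotangent space through that same map -- a direct
matrix computation (my own sketch) does reproduce the minus sign:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(4, 4))
    W = A - A.T                        # w_{ij}: antisymmetric, nondegenerate

    T1 = W.T                           # v |-> w(v,.), i.e. v'_j = v^i w_{ij}
    T1_inv = np.linalg.inv(T1)
    W_star = T1_inv.T @ W @ T1_inv     # w pushed forward to the cotangent space
    T2 = W_star.T                      # the same lowering recipe, built from w*

    # The round trip V -> V* -> V** = V is MINUS the identity:
    assert np.allclose(T2 @ T1, -np.eye(4))

The minus sign seems to survive the obvious changes of sign convention,
which is what makes it look like a genuine phenomenon.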

Meanwhile, in article <8sag9v$17jp$1...@mortar.ucr.edu>, Toby Bartels
offered the following:

-I think part of your confusion, Paul,
-is that Eric doesn't necessarily mean
-what you think he means by a symbol like "e^i" or "e_i".

Yes - thanks! I think I'll try to introduce some new notation
in the next post, keeping the indices to a minimum.


Arkadas Ozakin

Oct 24, 2000

Paul Arendt wrote:

> <fo...@my-deja.com> wrote:

[...]

> >e^i = g^{ij} e_j
>
> All right -- it's a bit odd, but perfectly valid if you've got a
> metric! But then I urge you to write the relation between the
> two bases as
> e^i . e_j = delta^i_j
> with the "dot product" between them, since that's what you're
> doing:
> e^i . e_j = g^{ik} e_k . e_j = g^{ik} g_{kj} = delta^i_j.

[...]

In the book Feynman Lectures on Gravitation, Feynman shows a geometric
interpretation of "covariant components" and "contravariant components".
He takes a vector in 2D and picks (non-perpendicular) "coordinate axes".
He says that if you draw parallels to the coordinate axes from the
tip of the vector, the intersections of those parallels with the
coordinate axes give you one type of components for your vector. If you
draw _perpendiculars_ to the coordinate axes from the tip of your vector,
the components you get are the other type of components. (The types I'm
referring to are covariant and contravariant components; you can figure
out which is which.)

I had no idea what he was talking about when I first saw this. I knew
about tangent spaces and dual spaces, and representing both in the same
picture as he did didn't make sense to me. Later, I realized that what he
is doing is just what you mention above; he takes the basis _vectors_
corresponding to the dual basis covectors (or one-forms, if you wish).
Then, the components of the vector in the original basis and this dual
basis really work out in the way he says. I think it's kind of neat.
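
Feynman's picture is easy to replay in numpy (my own numbers): with a
slanted basis, solving X = X^i e_i gives the parallel-projection
components, while the dot products X . e_i give the perpendicular-drop
ones (exactly, when the e_i are unit vectors), and the two are related
by the Gram matrix g_{ij} = e_i . e_j.

    import numpy as np

    # A slanted ("non-perpendicular") basis in the Euclidean plane:
    e1 = np.array([1.0, 0.0])
    e2 = np.array([1.0, 1.0])
    E = np.column_stack([e1, e2])

    X = np.array([2.0, 3.0])                # a vector, Cartesian components

    X_contra = np.linalg.solve(E, X)        # X^i: draw parallels to the axes
    X_co = np.array([e1 @ X, e2 @ X])       # X_i: drop perpendiculars

    g = E.T @ E                             # Gram matrix g_{ij} = e_i . e_j
    assert np.allclose(X_co, g @ X_contra)  # X_i = g_{ij} X^j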

arkadas

Stephen Speicher

Oct 25, 2000

Is there some reason you have left out (C)?

Doesn't (C) define the complex group GL(n,C), and then all _three_
groups have the common intersection = U(n)?

Toby Bartels

Oct 26, 2000

Stephen Speicher wrote:

>Toby Bartels wrote:

>>Recall the recurring theme on this newsgroup
>>that, if you have any 2 of the following, you have the other
>>(assuming the first 2 were compatible in a certain sense):
>>A) real inner product
>>B) real symplectic structure
>>C) complex structure
>>and that all 3 together give you a complex inner product (D).
>>Then (A) allows you to define O(2n),
>>(B) allows you to define Sp(2n),
>>and (D) allows you to define U(n).
>>That is the context where U(n) = O(2n) intersect Sp(2n) is true.

>Is there some reason you have left out (C)?
>Doesn't (C) define the complex group GL(n,C), and then all _three_
>groups have the common intersection = U(n)?

Actually, it's more subtle than that.
I didn't think of it before, but it appears to me that
U(n) may be found as the intersection of any 2 of the 3:
U(n) = O(2n) ^ Sp(2n) = O(2n) ^ GL(n,\C) = Sp(2n) ^ GL(n,\C).


-- Toby
to...@math.ucr.edu

John Baez

Nov 3, 2000

In article <8t9arf$15fm$1...@mortar.ucr.edu>,
Toby Bartels <to...@math-cl-n02.math.ucr.edu> wrote:

>An earlier version of Toby wrote:

>>Recall the recurring theme on this newsgroup
>>that, if you have any 2 of the following, you have the other
>>(assuming the first 2 were compatible in a certain sense):
>>A) real inner product
>>B) real symplectic structure
>>C) complex structure

>I didn't think of it before, but it appears to me that
>U(n) may be found as the intersection of any 2 of the 3:
>U(n) = O(2n) ^ Sp(2n) = O(2n) ^ GL(n,\C) = Sp(2n) ^ GL(n,\C).

Hey, you're right! This is an immediate consequence
of what you wrote above. U(n) is the group of real-linear
transformations that preserve all 3 structures, but preserving
two automatically means you preserve the third.

