If we define the outer product - denoted X - to be:

(A X B)_i = e_{ijk} A^j B^k

Where e_{ijk} is the permutation tensor, i,j,k = 1,...,N in
N-space, and Einstein summation is used.
Now, I once heard someone say that the cross-product A X B only
happens to give a vector that is perpendicular to A and B in
3-space; in any other space this is not true (obvious for the
2-dimensional case). But if we look at the definition of the
outer product above, I doubt whether that person is right..?
My first question is, is the cross-product really synonymous with
the outer product, or is the cross product something that is
only defined in Euclidean 3-space?
And my second question is, does the outer product as defined
above always give back a vector that is perpendicular to
both A and B, in N-dimensional space?
Cheers,
Willem
> If we define the outer product - denoted X - to be:
>
> (A X B)_i = e_{ijk} A^j B^k
>
> Where e_{ijk} is the permutation tensor, i,j,k = 1,...,N in
> N-space, and Einstein summation is used.
There is no such tensor unless N = 3.
--
Timothy Murphy
e-mail: t...@birdsnest.maths.tcd.ie
tel: +353-86-233 6090
s-mail: School of Mathematics, Trinity College, Dublin 2, Ireland
So what then is the difference between the cross-product and the outer product?
If we stay in tensor land, then we could define the outer product between two
tensors, A and B, to give A (x) B, right? So we add the contravariant and
covariant ranks to get a new tensor: if A is an (M, N) tensor and B
is an (O, P) tensor, A (x) B would be an (M+O, N+P) tensor.
But how does this relate to the cross-product? The two terms always seem to
be used interchangeably (people seem to use them that way, anyway).
"Timothy Murphy" <t...@birdsnest.maths.tcd.ie> wrote in message
news:fxjda.3157$pK2....@news.indigo.ie...
> So what then is the difference between the cross-product and the outer
> product? If we stay in tensor land, then we could define the outer product
> between two tensors, A and B, to give A (x) B, right? So we add the
> contravariant and covariant ranks to get a new tensor: if A is an (M,
> N) tensor and B is an (O, P) tensor, A (x) B would be an (M+O, N+P) tensor.
Yes, I think the term outer product (or tensor product) of two tensors
is used in this sense.
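For what it's worth, here is a quick numpy sketch of that rank
bookkeeping (the example arrays are mine; numpy only tracks the total
number of indices, not the contravariant/covariant split):

import numpy as np

A = np.array([1.0, 2.0, 3.0])      # a (1,0) tensor: 1 index
B = np.array([4.0, 5.0])           # another (1,0) tensor

T = np.einsum('i,j->ij', A, B)     # tensor product: a (2,0) tensor
print(T.ndim)                      # 2 = 1 + 1 indices

S = np.einsum('ij,k->ijk', T, B)   # (2,0) times (1,0) gives (3,0)
print(S.shape)                     # (3, 2, 2): 3 = 2 + 1 indices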
> But how does this relate to the cross-product? The two terms always
> seem to be used interchangeably (people seem to use them that way, anyway).
The cross-product is defined in 3 dimensions exactly as you said.
I don't think the terms cross-product and outer product
_are_ (or should be) used interchangeably.
The cross-product is derived from the outer product
by contraction with the epsilon tensor, as you said.
Exactly my thought. Although it seems to me that a lot of people
do use the two terms interchangeably. But then again, I do hang out
with loads of pseudo-mathematicians (i.e., computer scientists). ;)
If anyone could tell me whether or not I'm on the right track, I'd be much
obliged.
======================================================
The outer product and the cross product are _definitely not_ terms for
one and the same thing, despite what many people would like to believe. The
same is true of the inner product and the dot product, the latter being
defined as
A.B = Sum[i=1,...,N] A^i B^i (1)
N is usually 2 or 3.
The outer product and the inner product are defined in any dimension
(because they are _tensor equations_),
unlike the cross product and dot product, which only hold in
Euclidean 3-space (one could of course argue that the dot product
_is_ also defined in R^2 or R^N as in (1)). The cross product
between two vectors also only _ever_ spits out another vector that
is orthogonal to the original ones in 3-space (and 7-space, but that's
an aside). In 2-space there can't possibly be a "vector" orthogonal to
the two, because that would imply going out of 2-space into a higher
dimension to define such a vector. In 4-space such a vector
can have infinitely many orientations, so "orthogonal" here
is not really defined.
So, apart from the cross product only being defined in 3-space,
if we could define it in N-space as a binary operation
f: R^N X R^N -> R^N, it would definitely not show the same
"behaviour" (i.e., the resulting vector being orthogonal) as it does in
Euclidean 3-space.
THE INNER PRODUCT
The inner product is really just a contraction of tensors, and
more generally, a contraction of two tensors with a third tensor,
called the metric tensor. In Euclidean space, this metric, when
written down in matrix form, is the identity matrix, so we usually
omit its presence from the inner product equation. So we
define the inner product between two (1,0) tensors to be
A.B = g_{ij} A^{i} A^{j} (Einstein summation is used)
this is a tensor equation, and thus holds true in any basis,
frame, system. Note that the definition of the _dot product_
as given in (1) is _not_ a tensor equation, and thus only holds
in the space it was defined in.
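To make the role of the metric concrete, here is a small numpy sketch
(the non-Euclidean metric below is an arbitrary symmetric positive
definite matrix I made up for illustration):

import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

g_euclid = np.eye(3)                   # Euclidean metric: the identity
g_other  = np.array([[2.0, 1.0, 0.0],  # some other symmetric,
                     [1.0, 2.0, 0.0],  # positive definite metric
                     [0.0, 0.0, 1.0]])

def inner(g, a, b):
    return np.einsum('ij,i,j->', g, a, b)   # g_{ij} A^i B^j

print(inner(g_euclid, A, B))   # 32.0, the same as np.dot(A, B)
print(inner(g_other, A, B))    # differs: formula (1) presupposes
                               # the Euclidean metric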
THE OUTER PRODUCT
The outer product of two tensors of arbitrary rank gives a tensor
of their combined ranks. Suppose we have two (1,0) tensors (A, B) and
get their tensor product to yield a (2,0) tensor T = A(x)B.
The components of T can be put into a matrix. Now we
can obtain the symmetric, and anti-symmetric parts of this matrix
with:
T^{ij}(SYM) = 1/2(T^{ij} + T^{ji}) i,j=1,...,N
T^{ij}(ASYM)= 1/2(T^{ij} - T^{ji})
The anti-symmetric part of T^{ij} has diagonal elements which are
necessarily 0. Now take N=3: there are then 6 non-zero components of
T^{ij}(ASYM). Now apparently these 6 components can be arranged
into a vector in such a way that we obtain the equation for the
cross product. But _only_ when N=3.
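Here is a quick numpy sketch of that rearrangement (the component
ordering, and the factor of 2 that undoes the 1/2 above, are chosen to
match the usual cross product):

import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

T = np.einsum('i,j->ij', A, B)   # outer product: a (2,0) tensor
ASYM = 0.5 * (T - T.T)           # anti-symmetric part; diagonal is 0

# The 3 independent non-zero components, arranged into a vector:
C = 2.0 * np.array([ASYM[1, 2], ASYM[2, 0], ASYM[0, 1]])
print(C)               # [-3.  6. -3.]
print(np.cross(A, B))  # the same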
---------
Two definitions of the cross product (N=3!!):
The cross product in tensor notation can also be defined as:
[1]
(A X B)_i = e_{ijk} A^{j} B^{k}
Where e_{ijk} is the permutation tensor, but this tensor is
only defined when _N=3_.
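A direct numerical check of [1], building the N=3 permutation tensor
by hand (a sketch; I use the standard orientation, e_{123} = +1):

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0   # even permutations
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0  # odd permutations

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

C = np.einsum('ijk,j,k->i', eps, A, B)   # (A X B)_i = e_{ijk} A^j B^k
print(C)               # [-3.  6. -3.]
print(np.cross(A, B))  # the same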
Another nice way to define the cross product is by calculating
a determinant:
[2]
In Euclidean 3-space, to calculate/define the cross product
A X B, put the components of both vectors in a matrix like
so:
[ I J K ]
[ A.x A.y A.z ]
[ B.x B.y B.z ]
Where I, J, and K are the basis vectors for which the components
of A and B give the vectors A and B respectively (in other words,
A and B are defined on the basis I, J, K). Now take the determinant
of this 3x3 matrix, and voila! We get the equation for the cross
product.
----------
Remember, the inner product and outer product are _tensor equations_
and are therefore valid in any frame, any basis, any coordinate
system, ANY dimension.
You seem to be talking about the exterior product. If so, your formula
is incorrect (as a general formula in n dimensions). The exterior
product of two vectors A, B in a vector space V lives in a different
vector space, V^V, which is the antisymmetrization of the tensor
product space of V with itself. The dimension of V^V is n(n-1)/2. For
the case n = 3, an inner product on V allows one to construct a
natural isomorphism between V and V^V, and one can identify A^B with a
vector in V via this isomorphism. For Euclidean 3-space, this
identifies A^B with the cross-product vector AxB. In general, V and
V^V have different dimensions, and it doesn't make sense to ask if A^B
is orthogonal to A and B.
John Mitchell
I think I was talking about a "contraction" (or, really, a partial
evaluation) of the permutation tensor and two (1,0) tensors, which
yields a (0,1) tensor. No tensor products; the equation I gave
is just a partial evaluation of a tensor.
> The exterior
> product of two vectors A, B in a vector space V lives in a different
> vector space, V^V, which is the antisymmetrization of the tensor
> product space of V with itself. The dimension of V^V is n(n-1)/2. For
I am not very familiar with differential forms, which I presume you
are talking about? Is the exterior product the same as the wedge
product, or are you going to tell me I should go and read up
on my tensor calculus on manifolds without a connection? As
I understand it, one does not need a metric or a connection on
a manifold to define the wedge product. (One does not need
a connection to define the exterior derivative).
If I understand you correctly you are saying that there exists a
map f: V^V -> V that is an isomorphism, and which - when
N=3 - maps the "exterior product" A^B to a vector in V, which happens
to be orthogonal to A and B (both originally in V)? But _only_ when
N=3, right?
Cheers,
Willem H. de Boer
"Willem H. de Boer" wrote:
>
> If we define the outer product - denoted X - to be:
>
> (A X B)_i = e_{ijk} A^j B^k
>
> Where e_{ijk} is the permutation tensor, i,j,k = 1,...,N in
> N-space, and Einstein summation is used.
>
> Now, I once heard someone say that the cross-product A X B only
> happens to give a vector that is perpendicular to A and B in
> 3-space; in any other space this is not true (obvious for the
> 2-dimensional case). But if we look at the definition of the
> outer product above, I doubt whether that person is right..?
>
If we require the cross product A X B to be:
(i) a vector perpendicular to A and B; and
(ii) the length of the resulting vector to be equal to the area of the
parallelogram formed by A and B,
then it is also possible to define such a product in R^7. Dimensions 3
and 7 are the only dimensions for which this is possible.
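For the curious, here is a numerical sketch of the 7-dimensional case.
The multiplication table below is one common convention,
e_i x e_{i+1} = e_{i+3} with indices mod 7 (0-indexed in the code); if
I have the table right, both checks print (approximately) zero:

import numpy as np

# Structure constants, extended totally antisymmetrically over the
# seven Fano-plane triples (i, i+1, i+3) mod 7:
eps7 = np.zeros((7, 7, 7))
for i in range(7):
    a, b, c = i, (i + 1) % 7, (i + 3) % 7
    eps7[a, b, c] = eps7[b, c, a] = eps7[c, a, b] = 1.0
    eps7[b, a, c] = eps7[a, c, b] = eps7[c, b, a] = -1.0

def cross7(x, y):
    return np.einsum('ijk,j,k->i', eps7, x, y)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(7), rng.standard_normal(7)
z = cross7(x, y)

# (i) perpendicular to both inputs:
print(np.dot(z, x), np.dot(z, y))
# (ii) length = area of the parallelogram spanned by x and y:
print(np.dot(z, z) - (np.dot(x, x) * np.dot(y, y) - np.dot(x, y) ** 2))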
Given (n-1) vectors, we can define the cross product in n dimensions of
all those vectors together using the exterior product in the Clifford
algebra Cl_n, and get a single vector which is
(i) perpendicular to each of the (n-1) generating vectors; and
(ii) having length equal to the hypervolume of the parallelepiped formed
by the n-1 vectors.
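Here is a numerical sketch of that construction, via cofactor
expansion rather than Clifford algebra (it gives the same vector; the
function name is mine, and for n = 3 it reduces to the determinant
trick given earlier in the thread):

import numpy as np

def cross_n(*vectors):
    # Generalized cross product of n-1 vectors in R^n: component i is
    # the signed i-th cofactor of the (n-1) x n matrix of the vectors.
    V = np.array(vectors)
    n = V.shape[1]
    return np.array([(-1) ** i * np.linalg.det(np.delete(V, i, axis=1))
                     for i in range(n)])

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 4))   # three vectors in R^4
z = cross_n(a, b, c)

# (i) perpendicular to each generating vector:
print(np.dot(z, a), np.dot(z, b), np.dot(z, c))   # all ~0
# (ii) length^2 = Gram determinant = squared hypervolume of the
#      parallelepiped spanned by a, b, c:
G = np.array([a, b, c]) @ np.array([a, b, c]).T
print(np.dot(z, z) - np.linalg.det(G))            # ~0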
See "Clifford Algebras and Spinors" by none other than the late Pertti
Lounesto. If he were still here, he would have ranted about the
cognitive bugs of the other posters; then given the above information;
and then suggested you buy his book, as well as download CLICAL. I sort
of miss him at times.
For some more details, try a Google Group search on "cross product
Lounesto".
Cheers - Chas.
> The outer product and the cross product, are _definitely not_ terms for
> one and the same thing
True.
> The same thing is true for the inner product, and the dot product,
False. (See below.)
> the latter being defined as
>
> A.B = Sum[i=1,...,N] A^i B^i (1)
No. (See below.)
> [snip - cross product]
> In 4-space such a vector
> can have infinitely many orientations, so "orthogonal" here
> is not really defined.
In 4-space there are many directions orthogonal to a pair of vectors;
orthogonal is perfectly well defined (it means the inner or dot
product is zero), but a pair of (non-parallel) vectors does not define
a single orthogonal direction.
> [snip]
> So we
> define the inner product between two (1,0) tensors to be
>
> A.B = g_{ij} A^{i} A^{j} (Einstein summation is used)
(A^j should be B^j)
The tensor g has to satisfy certain properties (so that the product
A.B is symmetric and positive definite).
> this is a tensor equation, and thus holds true in any basis,
> frame, system. Note that the definition of the _dot product_
> as given in (1) is _not_ a tensor equation, and thus only holds
> in the space it was defined in.
Given an inner product, as you have (correctly) defined it, you can
find coordinates such that A.B = sum_i A^i B^i .
Here is the right way to think about this.
An inner product on the real vector space V is a mapping VxV->R such
that:
(A+B).C = A.C + B.C
(sA).C = s(A.C) for scalar s
A.A>=0, and A.A=0 implies A=0
A.B=B.A .
Choose a basis (e_1,...,e_N) for (finite-dimensional) V. Then any
vector A in V can be uniquely written as
A = A^i e_i .
The first two conditions on the inner product imply that there is a
collection of numbers g_{ij} so that
A.B = g_{ij} A^i B^j .
The last condition implies that g_{ij}=g_{ji} for all i,j. (The third
condition, positive definiteness, is not as easy to describe in terms
of g, so I won't try.)
Now, choose another basis (f_1,...,f_N) for V. Then there exist
unique matrices L and M such that
e_i = L^j_i f_j
f_i = M^j_i e_j ;
L and M are inverses, L^i_j M^j_k = delta^i_k .
Now, remember that we defined our inner product without any reference
to a basis; it's just something that satisfies those four properties.
It was not until we introduced a basis that we obtained the numbers
g_{ij}. We can construct other numbers, h_{ij}, that represent the
inner product in terms of the basis (f). That is, if
A = a^i f_i ,
and similarly for B, then
A.B = h_{ij} a^i b^j .
Note that
A = A^i e_i = A^i L^j_i f_j = a^j f_j
so (since the expression of a vector with respect to a basis is
unique)
a^j = L^j_i A^i .
Thus (being careful with our indices)
A.B = h_{ij} a^i b^j
= h_{ij} L^i_k A^k L^j_m B^m
= (h_{ij} L^i_k L^j_m) A^k B^m
= g_{km} A^k B^m
so (since this holds for all A,B)
g_{km} = h_{ij} L^i_k L^j_m .
What I've just done here is derived the transformation rule for
(0,2)-tensors, starting with the definition that a (0,2)-tensor is a
bilinear mapping from VxV to R. (I never used the fact that it's an
inner product, which is a special kind of (0,2)-tensor.)
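One can check this transformation rule numerically; a sketch with a
random symmetric positive definite g and a random change of basis:

import numpy as np

rng = np.random.default_rng(2)
N = 4

X = rng.standard_normal((N, N))
g = X @ X.T + N * np.eye(N)      # the inner product in the basis (e)

L = rng.standard_normal((N, N))  # e_i = L^j_i f_j
M = np.linalg.inv(L)             # f_i = M^j_i e_j

# The same inner product in the basis (f): h = M^T g M, i.e.
# h_{km} = g_{ij} M^i_k M^j_m.
h = np.einsum('ij,ik,jm->km', g, M, M)

# Verify the rule derived above: g_{km} = h_{ij} L^i_k L^j_m.
print(np.allclose(np.einsum('ij,ik,jm->km', h, L, L), g))   # True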
Now (as I said above) you can always find a basis such that
h_{ij}=delta_{ij}.
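A sketch of how one finds such a basis, using the Cholesky
factorization g = C C^T (that particular factorization is my choice;
any factorization of g works):

import numpy as np

rng = np.random.default_rng(3)
N = 4
X = rng.standard_normal((N, N))
g = X @ X.T + N * np.eye(N)       # an inner product in some basis

C = np.linalg.cholesky(g)         # g = C C^T
M = np.linalg.inv(C.T)            # change-of-basis matrix

h = M.T @ g @ M                   # the inner product in the new basis
print(np.allclose(h, np.eye(N)))  # True: h_{ij} = delta_{ij}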
One often refers to the "usual R^N", or the "usual basis for R^N", or
the "usual inner product on R^N"; what this means is the basis
e_1 = [1;0;0;...;0]
e_2 = [0;1;0;...;0]
...
e_N = [0;0;0;...;1]
and the inner product defined by
e_i.e_j = delta_{ij} .
The terms "inner product" and "dot product" are, as far as I know,
synonymous, although the term "dot product" is most often used to
refer to the usual (or standard) inner product on R^N above.
Note, though, that a vector space does not come with an inner product
built in; if you want an inner product on a vector space, you have to
say which one you want. (There are "standard" inner products on all
the R^n, but one can also speak of R^n without referring to this
standard [or any other] inner product.) The outer or tensor product,
on the other hand, always exists.
> The outer product of two tensors of arbitrary rank gives a tensor
> of their combined ranks. Suppose we have two (1,0) tensors (A, B) and
> get their tensor product to yield a (2,0) tensor T = A(x)B.
> The components of T can be put into a matrix. Now we
> can obtain the symmetric, and anti-symmetric parts of this matrix
> with :
>
> T^{ij}(SYM) = 1/2(T^{ij} + T^{ji}) i,j=1,...,N
> T^{ij}(ASYM)= 1/2(T^{ij} - T^{ji})
>
> The anti-symmetric part of T^{ij} has diagonal elements which are
> necessarily 0. Now take N=3: there are then 6 non-zero components of
> T^{ij}(ASYM). Now apparently these 6 components can be arranged
> into a vector in such a way that we obtain the equation for the
> cross product. But _only_ when N=3.
Close.
There are many ways of looking at this. One is as you described.
(But while there are 6 non-zero coefficients, there are only 3
independent non-zero coefficients; it is those three that are arranged
into a vector in R^3.) More precisely, one can construct the
"exterior product", which is the antisymmetric part of the tensor
product. This gives you the exterior algebra, consisting of
(0,0)-tensors, (1,0)-tensors, ..., and (N,0)-tensors. (Or (0,0),
(0,1), ..., (0,N)-tensors.)
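As a counting aside: an antisymmetric (p,0)-tensor has binomial(N,p)
independent components, and binomial(N,2) = N(N-1)/2 equals N only
when N = 3, which is why only there can an antisymmetric (2,0)-tensor
masquerade as a vector. A one-liner check:

from math import comb

N = 3
print([comb(N, p) for p in range(N + 1)])            # [1, 3, 3, 1]
print([n for n in range(2, 10) if comb(n, 2) == n])  # only n = 3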
An inner product, together with an orientation, gives an isomorphism
(called the "Hodge star") between antisymmetric (p,0)-tensors and
antisymmetric (N-p,0)-tensors. The inner product also gives an
isomorphism between (p,q)-tensors and (p+1,q-1)-tensors (raising and
lowering indices).
Thus, when N=3, you have isomorphisms:
vectors
= (1,0)-tensors
--[Hodge]--> antisymmetric (2,0)-tensors
--[raising/lowering indices]--> antisymmetric (1,1)-tensors
= antisymmetric matrices .
Let us write the composition of these, vectors->matrices, as M. (That
is, if A is a 3-vector, then M(A) is an antisymmetric 3-by-3 matrix.)
Then there are a couple of nice things:
AxB = M(A)B
M(AxB) = [M(A),M(B)] = M(A)M(B)-M(B)M(A) .
(The first one gives you an easy way to construct M. The second one
is fairly sophisticated, and says that the vector space isomorphism M
is an algebra isomorphism from the algebra of cross products to the
Lie algebra so(3) of the rotation group, which is a nice fact when you
remember how velocity V, position R, and rotational [angular] velocity
W are related:
V=WxR .)
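A numerical sketch of M and of those two identities (M is often
called the "hat map"; the random test vectors are mine):

import numpy as np

def M(a):
    # The antisymmetric matrix with M(a) @ b == np.cross(a, b).
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(4)
A, B = rng.standard_normal(3), rng.standard_normal(3)

print(np.allclose(np.cross(A, B), M(A) @ B))      # AxB = M(A)B
print(np.allclose(M(np.cross(A, B)),
                  M(A) @ M(B) - M(B) @ M(A)))     # M(AxB) = [M(A),M(B)]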
(Note that I snuck the word "orientation" in there. That's an
important but easily forgotten element in the construction of the
cross product.)
The exterior product is often denoted by something that looks sort of
like the character "^" (but is wider and is located a bit lower).
Perhaps due to the relationship between it and the cross product that
I described above, the cross product is sometimes called the exterior
product and denoted by the same "^"-like character. The exterior
product and the cross product are not, though, the same thing. And
the exterior and outer products are different as well, the former
being the antisymmetrization of the latter.
Kevin.
> I am not very familiar with differential forms, which I presume you
> are talking about? Is the exterior product the same as the wedge
> product, or are you going to tell me I should go and read up
> on my tensor calculus on manifolds without a connection?
The exterior product and the wedge product are exactly the same
thing. They are strictly linear algebra constructions, no notion of
differentiability involved. Perhaps unfortunately, they are often
not taught outside of a tensor analysis on manifolds course.
> As
> I understand it, one does not need a metric or a connection on
> a manifold to define the wedge product. (One does not need
> a connection to define the exterior derivative).
Both of these are true.
> If I understand you correctly you are saying that there exists a
> map f: V^V -> V that is an isomorphism, and which - when
> N=3 - maps the "exterior product" A^B to a vector in V, which happens
> to be orthogonal to A and B (both originally in V)? But _only_ when
> N=3, right?
If you precede all of these with the hypothesis "N=3", and add the
additional hypothesis of oriented V with inner product, then yes. (My
other message explains where this map f comes from.)
Kevin.
True, but it's worth noting (for the O.P. and Willem H. de Boer) that
one of the main applications of exterior algebra is the theory of
differential forms on a manifold. I think Spivak's "Calculus on
Manifolds" discusses this subject (with a title like that, I should
hope so!).
> > If I understand you correctly you are saying that there exists a
> > map f: V^V -> V that is an isomorphism, and which - when
> > N=3 - maps the "exterior product" A^B to a vector in V, which happens
> > to be orthogonal to A and B (both originally in V)? But _only_ when
> > N=3, right?
>
> If you precede all of these with the hypothesis "N=3", and add the
> additional hypothesis of oriented V with inner product, then yes. (My
> other message explains where this map f comes from.)
>
> Kevin.
As a clarification (again for the O.P. and Willem H. de Boer), I only
claimed an isomorphism for n = 3. As I remarked earlier, in other
dimensions V and V^V don't even have the same dimension, so they
cannot be isomorphic.
John Mitchell
"Willem H. de Boer" wrote:
>
> My first question is, is the cross-product really synonymous to
> the outer product, or is the cross product something that is
> only defined in Euclidian 3-space?
Consider 4-space. The exterior product a^b^c^d of four vectors gives
the volume of the 4-parallelepiped that they enclose. The product
a^b^c is a vector in the 4-space with basis w^x^y, w^x^z, w^y^z,
x^y^z, and if you identify w^x^y with z etc. (with due attention
to sign) then you can regard a^b^c^d as the dot product of two
4-vectors, and this generalizes to n-space for any n. This means
that you can assign an n-vector to an orientable (n-1)-volume in n-space.
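This identification is easy to check numerically; a sketch (the
helper is the same cofactor construction as earlier in the thread,
and the sign convention puts d in the first row):

import numpy as np

def triple_cross(a, b, c):
    # The vector in R^4 identified with a^b^c (signed cofactors).
    V = np.array([a, b, c])
    return np.array([(-1) ** i * np.linalg.det(np.delete(V, i, axis=1))
                     for i in range(4)])

rng = np.random.default_rng(5)
a, b, c, d = rng.standard_normal((4, 4))

v = triple_cross(a, b, c)
# a^b^c^d, the 4-parallelepiped volume, as a dot product:
print(np.allclose(np.linalg.det(np.array([d, a, b, c])),
                  np.dot(v, d)))                # True
# and v is orthogonal to a, b, c (a dot product with any of them is
# a determinant with a repeated row):
print(np.dot(v, a), np.dot(v, b), np.dot(v, c)) # all ~0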
Lew Mammel, Jr.
"Kevin Foltinek" <folt...@math.utexas.edu> wrote in message
news:wo6znnt...@linux60.ma.utexas.edu...
Yes, I vaguely remember the definition of the Hodge star.
>The inner product also gives an
>isomorphism between (p,q)-tensors and (p+1,q-1)-tensors (raising and
>lowering indices).
I was always taught that it is the metric that performs a mapping from
(p,q)-tensors to (p+1,q-1)-tensors (raising/lowering indices), but I assume
the metric usually acts as the inner product of the space as well?
>Thus, when N=3, you have isomorphisms:
> vectors
> = (1,0)-tensors
> --[Hodge]--> antisymmetric (2,0)-tensors
> --[raising/lowering indices]--> antisymmetric (1,1)-tensors
> = antisymmetric matrices .
Right - Back to my original question. Let's take N=3. If I understand your
text correctly we could take two (1,0)-tensors A and B, and obtain the
wedge product (or exterior product) A^B, which yields a (2,0)-tensor.
Now from your text, the inner product acts as a map from (p,0)-tensors
to (N-p,0)-tensors (this map is an isomorphism, and is called the
Hodge star), in the case of our (2,0)-tensor A^B, the inner product
would map this to a (1,0)-tensor, let's call this C. Wow, by sheer
"luck" we got back a (1,0)-tensor C from our beloved inner product,
because N=3!
Now I assume C will be orthogonal to both A and B. Right?
Cheers,
Willem
> I was always taught that it is the metric that performs a mapping from
> (p,q)-tensors to (p+1,q-1)-tensors (raising/lowering indices), but I assume
> the metric usually acts as the inner product of the space as well?
Yes. An inner product (or more generally, any bilinear mapping)
VxV->R can be thought of as a mapping V->V*. That is, given a
bilinear mapping (such as an inner product)
g : VxV->R ,
you automatically have an associated mapping
G : V->V* ,
defined by
[G(v)](w) = g(v,w) .
If g is nondegenerate (which an inner product is) then G is an
isomorphism. Since V=T(1,0) and V*=T(0,1) (the (1,0) and
(0,1)-tensors, respectively), G:T(1,0)->T(0,1); this induces mappings
T(p,q)->T(p-1,q+1) (one such mapping for each of the p indices) and
its inverse similarly induces mappings T(p,q)->T(p+1,q-1).
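In components this is just contraction with g (and with its inverse
to go back up); a numpy sketch with an arbitrary symmetric positive
definite metric:

import numpy as np

rng = np.random.default_rng(6)
N = 3
X = rng.standard_normal((N, N))
g = X @ X.T + N * np.eye(N)    # a metric (nondegenerate; here SPD)

v = rng.standard_normal(N)               # a (1,0)-tensor
v_low = np.einsum('ij,j->i', g, v)       # G(v), a (0,1)-tensor

# Nondegeneracy lets us invert: raising the index recovers v.
print(np.allclose(np.einsum('ij,j->i', np.linalg.inv(g), v_low), v))

# And [G(v)](w) = g(v,w):
w = rng.standard_normal(N)
print(np.allclose(np.dot(v_low, w),
                  np.einsum('ij,i,j->', g, v, w)))   # True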
In differential geometry (and general relativity and other
applications) it is common to refer to a pointwise inner product (at
each vector space, there is an inner product) as a metric. The reason
for this is that such a choice of inner product gives you a notion of
arclength, hence geodesic, hence (minimal) distance between points,
thereby turning a (path-connected) space into a metric space.
When there is only one metric, it is commonly implicitly understood
that this metric is being used to raise and lower indices. When there
are multiple metrics, you must specify which metric is being used;
also, when there is more than one isomorphism V->V*, you must specify
which you are using for the raising/lowering. (Symplectic forms
induce isomorphisms V->V*; a symplectic form is a closed
non-degenerate 2-form, an antisymmetric bilinear map VxV->R.)
> Right - Back to my original question. Let's take N=3. If I understand your
> text correctly we could take two (1,0)-tensors A and B, and obtain the
> wedge product (or exterior product) A^B, which yields a (2,0)-tensor.
> Now from your text, the inner product acts as a map from (p,0)-tensors
> to (N-p,0)-tensors (this map is an isomorphism, and is called the
> Hodge star)
Close but not quite. The Hodge star maps antisymmetric (p,0) to
antisymmetric (N-p,0), and it requires not just an inner product but
also an orientation. Note that it is a map on antisymmetric tensors,
not arbitrary tensors. It is not the inner product that maps A(p,0)
to A(N-p,0), it is the Hodge star.
> in the case of our (2,0)-tensor A^B, the inner product
Hodge star, not inner product.
> would map this to a (1,0)-tensor, let's call this C. Wow, by sheer
> "luck" we got back a (1,0)-tensor C from our beloved inner product,
> because N=3!
Remember the orientation. (I am repeating this, because it is
important. A mere inner product alone does not give you this
isomorphism, and there is an orientation involved with the cross
product - remember the right-hand rule.)
> Now I assume C will be orthogonal to both A and B. Right?
Yes.
Kevin.
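To close the loop on the original question, here is a numerical
sketch of the whole construction for N=3, with the standard inner
product and orientation (conventions are mine: A^B = A(x)B - B(x)A,
hence the factor 1/2 in the star; the orientation enters through the
permutation tensor):

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

AwB = np.einsum('i,j->ij', A, B) - np.einsum('i,j->ij', B, A)  # A^B
C = 0.5 * np.einsum('ijk,jk->i', eps, AwB)   # Hodge star of A^B

print(np.allclose(C, np.cross(A, B)))   # True
print(np.dot(C, A), np.dot(C, B))       # 0.0 0.0: orthogonal to both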