Tensors and General Covariance


Alan Grayson

Nov 8, 2025, 8:25:17 AM
to Everything List
In some treatments of tensors, they're described as linear maps. So, in GR, if we have a linear map described as a 4x4 matrix of real numbers, which operates on a 4-vector described as a column matrix with entries (ct, x, y, z), which transforms to another 4-vector, what must be added in this description to claim that the linear transformation satisfies the definition of a tensor? TY, AG

Alan Grayson

Nov 9, 2025, 4:11:47 AM
to Everything List
On Saturday, November 8, 2025 at 6:25:17 AM UTC-7 Alan Grayson wrote:
In some treatments of tensors, they're described as linear maps. So, in GR, if we have a linear map described as a 4x4 matrix of real numbers, which operates on a 4-vector described as a column matrix with entries (ct, x, y, z), which transforms to another 4-vector, what must be added in this description to claim that the linear transformation satisfies the definition of a tensor? TY, AG

Let's call the linear transformation T; then the answer to my question might be that T is a tensor iff it has a continuous inverse.  I'm not sure if this is correct, but I seem to recall this claim in a video about tensors I viewed in another life. But even if it's true, it seems to conflict with the claim that an ordinary vector in Euclidean space is a tensor because it's invariant under linear (?) transformations. In this formulation, it is the argument of T, which we can call V, which is invariant, not the map T. I'd appreciate it if someone here could clarify my confusion. TY, AG

Brent Meeker

Nov 9, 2025, 4:07:30 PM
to everyth...@googlegroups.com
A tensor is a geometric object (possibly in an abstract space).  It transforms covariantly, which means that changes in coordinates (even non-linear changes in coordinates) leave it the same.  A physical vector, like velocity, is a rank 1 tensor.  The same vector, or tensor, has different representations in different coordinate systems.  Here's an excerpt from my general relativity lectures.

Vectors are used to represent things like motion, force, flow. You think of them as little arrows that show the direction of the motion, force or flow, and the length of the arrow tells you the magnitude of the motion, force or flow. So vectors have components, one for each dimension of the space they are in. In the plane, which is two dimensional, vectors have two components. But the values of the components change depending on which coordinate system you choose to represent them. The coordinates are something we impose to facilitate our calculation. But the vector or tensor is a THING that is independent of the coordinates we use to describe it. Just as I could give directions from this building to the upper parking lot by saying it's 250 ft that way, or I could say it's 200 ft north and 150 ft west. The distance and direction would be the same; only the description is different.
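A minimal NumPy check of the directions example above (the 250 ft / 200 ft north / 150 ft west numbers are from the paragraph; the frame construction is mine): the same displacement has components (200, 150) in compass coordinates and roughly (250, 0) in axes rotated to point "that way", and its length is 250 ft in both descriptions.

import numpy as np

# Same displacement, two coordinate systems (a sketch of the parking-lot example).
d_compass = np.array([200.0, 150.0])           # 200 ft north, 150 ft west

theta = np.arctan2(150.0, 200.0)               # angle of the "that way" direction
R = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])
d_rotated = R @ d_compass                      # components in axes aligned with "that way"

print(d_rotated)                               # ~[250, 0]: "250 ft that way"
print(np.linalg.norm(d_compass), np.linalg.norm(d_rotated))   # both 250.0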

Tensors are just another step up from vectors. They describe how vectors are changing. Here's a good example of a vector field. It's called a field because there's a vector at each point. It shows the flow of water out of Monterey Bay at a particular moment.

For any particular flow field we can ask how the vectors around a particular bit of water change as that bit of water is carried along by the flow. The answer is a tensor. Just as a vector is the abstraction of a little arrow, the tensor can be thought of as an abstraction of a little circle. And then the question becomes how this circle gets distorted as the water flows.

If the water is moving uniformly then the circle doesn't get distorted.  This is like the no-curvature tensor.  But the flow in Monterey Bay is not uniform, so the circle/tensor gets distorted.  

For flow like this in two dimensions the tensor is just an ellipse.  It has two directions corresponding to the axes of the ellipse and it has a size corresponding to the strength of the flow.  So it only takes three numbers to describe it; the Txy component is the same as the Tyx component.   But this is a tensor FIELD, so there's a different tensor at each point.  It takes 3 numbers at each point.



If we choose some particular coordinate system then the components have interpretations like, “How much does the x flow speed change as you move in the y direction.”
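As a hedged sketch of the tensor described above, the code below builds the symmetric part of the velocity gradient (the rate-of-strain tensor) for a made-up 2-D velocity field (not the Monterey Bay data). Its eigenvectors and eigenvalues give the axes and stretch factors of the ellipse into which a little circle of fluid is distorted, and Txy equals Tyx, so three numbers per point.

import numpy as np

# Hypothetical 2-D velocity field u(x, y), chosen purely for illustration.
def velocity(x, y):
    return np.array([0.3 * x + 0.1 * y,        # u_x
                     0.1 * x - 0.3 * y])       # u_y

def strain_tensor(x, y, h=1e-5):
    """Symmetric part of the velocity gradient: T_ij = (du_i/dx_j + du_j/dx_i) / 2."""
    du_dx = (velocity(x + h, y) - velocity(x - h, y)) / (2 * h)
    du_dy = (velocity(x, y + h) - velocity(x, y - h)) / (2 * h)
    J = np.column_stack([du_dx, du_dy])        # J[i, j] = du_i / dx_j
    return 0.5 * (J + J.T)                     # T_xy == T_yx: 3 independent numbers

T = strain_tensor(1.0, 2.0)
vals, vecs = np.linalg.eigh(T)                 # stretch factors and axes of the ellipse
print(T)
print(vals)
print(vecs)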

Russell Standish

Nov 9, 2025, 5:06:45 PM
to everyth...@googlegroups.com
It's got nothing to do with being invertible (which is the conjunction
of being 1:1 and onto).

Rather, a tensor is a multilinear map, that is, a map with multiple
arguments, linear in each. Obviously a standard linear map R^n -> R^n is a
rank 2 tensor. We recognise them generally as matrices. Vectors
correspond to linear maps by means of transposing them and forming the
inner product, i.e. a linear map from R^n -> R, and are rank 1 tensors as a result.
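A small NumPy illustration of that correspondence (the particular v and M are arbitrary): a vector acts as a rank 1 tensor through the inner product, and a matrix acts as a rank 2 (bilinear) tensor.

import numpy as np

v = np.array([1.0, 2.0, 3.0])       # a vector in R^3
M = np.arange(9.0).reshape(3, 3)    # a matrix, i.e. a rank 2 tensor on R^3

T1 = lambda u: v @ u                # the rank 1 tensor for v: a linear map R^3 -> R
T2 = lambda u, w: u @ M @ w         # the rank 2 tensor for M: a bilinear map R^3 x R^3 -> R

u, w = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(T1(u), T2(u, w))              # 1.0 and M[0, 1] = 1.0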






Alan Grayson

Nov 9, 2025, 6:49:16 PM
to Everything List
On Sunday, November 9, 2025 at 2:07:30 PM UTC-7 Brent Meeker wrote:


On 11/9/2025 1:11 AM, Alan Grayson wrote:


On Saturday, November 8, 2025 at 6:25:17 AM UTC-7 Alan Grayson wrote:
In some treatments of tensors, they're described as linear maps. So, in GR, if we have a linear map described as a 4x4 matrix of real numbers, which operates on a 4-vector described as a column matrix with entries (ct, x, y, z), which transforms to another 4-vector, what must be added in this description to claim that the linear transformation satisfies the definition of a tensor? TY, AG

Let's call the linear transformation T, then the answer to my question might be that T is a tensor iff it has a continuous inverse.  I'm not sure if this is correct, but I seem to recall this claim in a video about tensors I viewed in another life. But even if it's true, it seems to conflict with the claim that an ordinary vector in Euclidean space is a tensor because it's invariant under linear (?) transformations. In this formulation, it is the argument of T, which we can call V, which is invariant, not the map T. I'd appreciate it if someone here could clarity my confusion. TY, AG
A tensor is a geometric object (possibly in an abstract space).  It transforms covariantly; which means that changes in coordinates (even non-linear changes in coordinates) leave it the same. 

Round and round we go, but what a tensor is remains elusive! Please define the property that allows it to transform covariantly. Is it a map represented by a matrix, and if so, what property must it have that allows it to transform covariantly? AG

Alan Grayson

Nov 9, 2025, 6:56:17 PM
to Everything List
On Sunday, November 9, 2025 at 3:06:45 PM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 01:11:47AM -0800, Alan Grayson wrote:
>
>
> On Saturday, November 8, 2025 at 6:25:17 AM UTC-7 Alan Grayson wrote:
>
> In some treatments of tensors, they're described as linear maps. So, in GR,
> if we have a linear map described as a 4x4 matrix of real numbers, which
> operates on a 4-vector described as a column matrix with entries (ct, x, y,
> z), which transforms to another 4-vector, what must be added in this
> description to claim that the linear transformation satisfies the
> definition of a tensor? TY, AG
>
>
> Let's call the linear transformation T, then the answer to my question might be
> that T is a tensor iff it has a continuous inverse.  I'm not sure if this is
> correct, but I seem to recall this claim in a video about tensors I viewed in
> another life. But even if it's true, it seems to conflict with the claim that
> an ordinary vector in Euclidean space is a tensor because it's invariant under
> linear (?) transformations. In this formulation, it is the argument of T, which
> we can call V, which is invariant, not the map T. I'd appreciate it if someone
> here could clarity my confusion. TY, AG

It's got nothing to do with being invertible (which is the conjunction
of being 1:1 and onto).

Rather a tensor is a multilinear map -

If it's a map, how can an ordinary vector in Euclidean space be a tensor?
Such vectors are NOT maps! See my problem? AG
 
is a map with multiple
arguments, and linear in each. Obviously a standard linear map R^n -> R^n is a
rank 2 tensor. We recognise them generally as matrices. Vectors
correspond to linear maps by means of transposing them and forming the
inner product, ie a linear map from R^n->R, and are rank 1 tensors as a result.



>

Russell Standish

Nov 9, 2025, 7:12:54 PM
to everyth...@googlegroups.com
On Sun, Nov 09, 2025 at 03:56:16PM -0800, Alan Grayson wrote:
>
> If it's a map, how can an ordinary vector in Euclidean space be a tensor?
> Such vectors are NOT maps! See my problem? AG


I did explain that in my post if you read it. In an inner product
space, every vector is isomorphic to a linear map from the space to
its field. Eg R^n->R in the case of the space R^n. That linear map is
the rank 1 tensor. In mathematics, something that walks and quacks like a
duck is a duck.

Even the inner product operation is an example of a bilinear map,
hence a rank 2 tensor. In Minkowski spacetime, the inner product is
given by the metric tensor (the Minkowski metric).
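As a sketch of that last point (sign convention chosen here as (+, -, -, -), components arbitrary), the Minkowski inner product is the bilinear map (u, v) -> uᵀ η v with η the metric:

import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+, -, -, -)

def minkowski_inner(u, v):
    """The metric as a rank 2 tensor: a bilinear map (u, v) -> u^T eta v."""
    return u @ eta @ v

u = np.array([2.0, 1.0, 0.0, 0.0])       # components (ct, x, y, z) in some frame
print(minkowski_inner(u, u))             # invariant interval: 4 - 1 = 3.0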



Alan Grayson

Nov 9, 2025, 8:16:15 PM
to Everything List
On Sunday, November 9, 2025 at 5:12:54 PM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 03:56:16PM -0800, Alan Grayson wrote:
>
> If it's a map, how can an ordinary vector in Euclidean space be a tensor?
> Such vectors are NOT maps! See my problem? AG


I did explain that in my post if you read it. In an inner product
space, every vector is isomorphic to a linear map from the space to
its field. Eg R^n->R in the case of the space R^n. That linear map is
the rank 1 tensor. In mathematics, something walks and quacks like a
duck is a duck.

Even the inner product operation is an example of a bilinear map,
hence a rank 2 tensor. In Minkowski spacetime, the inner product is
given by the metric tensor (the Minkowski metric).

So a tensor is nothing more than a multilinear map to the reals? But if
we represent a tensor by a matrix, will it be automatically invariant 
under coordinate transformations? Do we need an inner product space
to define a tensor? TY, AG 

Alan Grayson

Nov 9, 2025, 8:24:32 PM
to Everything List
On Sunday, November 9, 2025 at 6:16:15 PM UTC-7 Alan Grayson wrote:
On Sunday, November 9, 2025 at 5:12:54 PM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 03:56:16PM -0800, Alan Grayson wrote:
>
> If it's a map, how can an ordinary vector in Euclidean space be a tensor?
> Such vectors are NOT maps! See my problem? AG


I did explain that in my post if you read it. In an inner product
space, every vector is isomorphic to a linear map from the space to
its field. Eg R^n->R in the case of the space R^n. That linear map is
the rank 1 tensor. In mathematics, something walks and quacks like a
duck is a duck.

Even the inner product operation is an example of a bilinear map,
hence a rank 2 tensor. In Minkowski spacetime, the inner product is
given by the metric tensor (the Minkowski metric).

So a tensor is nothing more than a multi linear map to the reals? But if
we represent a tensor by a matrix, will it be automatically invariant 
under coordinate transformations? Do we need an inner product space
to define a tensor? TY, AG 

If the tensor, represented by a matrix, is "unchanged" under a coordinate
transformation, does this mean its determinant is unchanged? AG 

Brent Meeker

Nov 9, 2025, 8:25:38 PM
to everyth...@googlegroups.com


On 11/9/2025 3:49 PM, Alan Grayson wrote:


On Sunday, November 9, 2025 at 2:07:30 PM UTC-7 Brent Meeker wrote:


On 11/9/2025 1:11 AM, Alan Grayson wrote:


On Saturday, November 8, 2025 at 6:25:17 AM UTC-7 Alan Grayson wrote:
In some treatments of tensors, they're described as linear maps. So, in GR, if we have a linear map described as a 4x4 matrix of real numbers, which operates on a 4-vector described as a column matrix with entries (ct, x, y, z), which transforms to another 4-vector, what must be added in this description to claim that the linear transformation satisfies the definition of a tensor? TY, AG

Let's call the linear transformation T, then the answer to my question might be that T is a tensor iff it has a continuous inverse.  I'm not sure if this is correct, but I seem to recall this claim in a video about tensors I viewed in another life. But even if it's true, it seems to conflict with the claim that an ordinary vector in Euclidean space is a tensor because it's invariant under linear (?) transformations. In this formulation, it is the argument of T, which we can call V, which is invariant, not the map T. I'd appreciate it if someone here could clarity my confusion. TY, AG
A tensor is a geometric object (possibly in an abstract space).  It transforms covariantly; which means that changes in coordinates (even non-linear changes in coordinates) leave it the same. 

Round and round we go, but what a tensor is remains elusive! Please define the property that allows it to transform covariantly. Is it a map represented by a matix, and if so, what property must it have that allows it to transform covariantly? AG
So you just read far enough to think of a question. 

Alan Grayson

Nov 9, 2025, 8:53:36 PM
to Everything List
BS! It's universally claimed that tensors remain unchanged under coordinate transformations, but rarely, if ever, is the property stated that tensors must have to produce this result. Your plot is no different. Does its matrix representation remain unchanged? Does this MEAN its determinant is unchanged? If not, please specify the property and the test for this invariance. AG

Brent Meeker

Nov 9, 2025, 10:01:50 PM
to everyth...@googlegroups.com


On 11/9/2025 5:24 PM, Alan Grayson wrote:


On Sunday, November 9, 2025 at 6:16:15 PM UTC-7 Alan Grayson wrote:
On Sunday, November 9, 2025 at 5:12:54 PM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 03:56:16PM -0800, Alan Grayson wrote:
>
> If it's a map, how can an ordinary vector in Euclidean space be a tensor?
> Such vectors are NOT maps! See my problem? AG


I did explain that in my post if you read it. In an inner product
space, every vector is isomorphic to a linear map from the space to
its field. Eg R^n->R in the case of the space R^n. That linear map is
the rank 1 tensor. In mathematics, something walks and quacks like a
duck is a duck.

Even the inner product operation is an example of a bilinear map,
hence a rank 2 tensor. In Minkowski spacetime, the inner product is
given by the metric tensor (the Minkowski metric).

So a tensor is nothing more than a multi linear map to the reals? But if
we represent a tensor by a matrix, will it be automatically invariant 
under coordinate transformations? Do we need an inner product space
to define a tensor? TY, AG 

If the tensor, represented by a matrix, is "unchanged" under a coordinate
transformation, does this mean its determinant is unchanged? AG 
No, in general it transforms like a density, so it's only unchanged if the determinant of the transformation matrix is ±1.

Brent
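A quick numerical check of the density-like behaviour (random matrices, purely for illustration): for a twice-covariant tensor the components transform as AᵀMA, so the determinant picks up a factor of (det A)² and is unchanged only when det A = ±1.

import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))     # matrix of a (0,2) tensor in the old coordinates
A = rng.normal(size=(4, 4))     # change-of-coordinates matrix (invertible with probability 1)

M_new = A.T @ M @ A             # how the components of a (0,2) tensor transform
print(np.linalg.det(M_new))
print(np.linalg.det(A) ** 2 * np.linalg.det(M))   # same value, up to rounding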




Alan Grayson

Nov 10, 2025, 12:55:16 AM
to Everything List
On Sunday, November 9, 2025 at 8:01:50 PM UTC-7 Brent Meeker wrote:


On 11/9/2025 5:24 PM, Alan Grayson wrote:


On Sunday, November 9, 2025 at 6:16:15 PM UTC-7 Alan Grayson wrote:
On Sunday, November 9, 2025 at 5:12:54 PM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 03:56:16PM -0800, Alan Grayson wrote:
>
> If it's a map, how can an ordinary vector in Euclidean space be a tensor?
> Such vectors are NOT maps! See my problem? AG


I did explain that in my post if you read it. In an inner product
space, every vector is isomorphic to a linear map from the space to
its field. Eg R^n->R in the case of the space R^n. That linear map is
the rank 1 tensor. In mathematics, something walks and quacks like a
duck is a duck.

Even the inner product operation is an example of a bilinear map,
hence a rank 2 tensor. In Minkowski spacetime, the inner product is
given by the metric tensor (the Minkowski metric).

So a tensor is nothing more than a multi linear map to the reals? But if
we represent a tensor by a matrix, will it be automatically invariant 
under coordinate transformations? Do we need an inner product space
to define a tensor? TY, AG 

If the tensor, represented by a matrix, is "unchanged" under a coordinate
transformation, does this mean its determinant is unchanged? AG 
No, in general it transforms like a density.  So it's only unchanged if the determinant of the transformation matrix is 1 

Brent

Someday I might find a teacher who can really define tensors, but that day has yet to arrive. Standish seems to come close, but does every linear multivariate function define a tensor? I'm waiting to see his reply. AG

Russell Standish

Nov 10, 2025, 1:16:05 AM
to everyth...@googlegroups.com
On Sun, Nov 09, 2025 at 09:55:15PM -0800, Alan Grayson wrote:
>
>
> Someday I might find a teacher who can really define tensors, but that day has
> yet to arrive. Standish seems to come close, but does every linear multivariate
> function define a tensor? I'm waiting to see his reply. AG

Well I did say multilinear function, but the answer is yes, every
multilinear function on a vector space is a tensor, and vice-versa.

I did write an 8 page article appearing in our student rag "The
Occasional Quark" when I was a physics student, which was my attempt
at explaining General Relativity when I was disgusted by the hash job
done by our professor. I haven't really thought about it much since
that time, though. I can also recommend the heavy tome by Misner,
Thorne and Wheeler.

I could scan the article and post it to this list, but not today - I
have a few other things on my plate before finishing up.

Alan Grayson

Nov 10, 2025, 1:34:38 AM
to Everything List
On Sunday, November 9, 2025 at 11:16:05 PM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 09:55:15PM -0800, Alan Grayson wrote:
>
>
> Someday I might find a teacher who can really define tensors, but that day has
> yet to arrive. Standish seems to come close, but does every linear multivariate
> function define a tensor? I'm waiting to see his reply. AG

Well I did say multilinear function, but the answer is yes, every
multilinear function on a vector space is a tensor, and vice-versa.

How does one prove that every multilinear function on a vector space is invariant under a 
change in coordinates? What exactly happens to its matrix representation? And Yes, please
post that short clarification defining tensors when you have time. AG

Russell Standish

Nov 10, 2025, 3:15:42 AM
to everyth...@googlegroups.com
On Sun, Nov 09, 2025 at 10:34:38PM -0800, Alan Grayson wrote:
>
>
> On Sunday, November 9, 2025 at 11:16:05 PM UTC-7 Russell Standish wrote:
>
> On Sun, Nov 09, 2025 at 09:55:15PM -0800, Alan Grayson wrote:
> >
> >
> > Someday I might find a teacher who can really define tensors, but that
> day has
> > yet to arrive. Standish seems to come close, but does every linear
> multivariate
> > function define a tensor? I'm waiting to see his reply. AG
>
> Well I did say multilinear function, but the answer is yes, every
> multilinear function on a vector space is a tensor, and vice-versa.
>
>
> How does one prove that every multilinear function on a vector space is
> invariant under a 
> change in coordinates? What exactly happens to its matrix representation? And

A vector is a geometric quantity having direction and length. As such
it is independent of any coordinate system that might be applied to
the space (although the list of numbers representing the components of
the vector in a given coordinate system must vary covariantly with the
coordinate system varying). A function operating on vectors, and
returning vectors or scalars must therefore also be independent of the
coordinate system.
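A minimal sketch of that point (the rotation angle is arbitrary): rotating the coordinate axes changes the list of components but not the vector itself, as its length shows.

import numpy as np

v_old = np.array([3.0, 4.0])                  # components of a vector in one frame

theta = 0.7                                   # rotate the axes by theta
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
v_new = R.T @ v_old                           # components of the SAME vector in the new frame

print(v_old, v_new)                           # different lists of numbers
print(np.linalg.norm(v_old), np.linalg.norm(v_new))   # same length: 5.0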


> Yes, please
> post that short clarification defining tensors when you have time. AG


>
>
> I did write an 8 page article appearing in our student rag "The
> Occasional Quark" when I was a physics student, which was my attempt
> at explaining General Relativity when I was disgusted by the hash job
> done by our professor. I haven't really thought about it much since
> that time, though. I can also recommend the heavy tome by Misner,
> Thorne and Wheeler.
>
> I could scan the article and post it to this list, but not today - I
> have a few other things on my plate before finishing up.

Alan Grayson

Nov 10, 2025, 3:19:49 AM
to Everything List
On Sunday, November 9, 2025 at 2:07:30 PM UTC-7 Brent Meeker wrote:


On 11/9/2025 1:11 AM, Alan Grayson wrote:


On Saturday, November 8, 2025 at 6:25:17 AM UTC-7 Alan Grayson wrote:
In some treatments of tensors, they're described as linear maps. So, in GR, if we have a linear map described as a 4x4 matrix of real numbers, which operates on a 4-vector described as a column matrix with entries (ct, x, y, z), which transforms to another 4-vector, what must be added in this description to claim that the linear transformation satisfies the definition of a tensor? TY, AG

Let's call the linear transformation T, then the answer to my question might be that T is a tensor iff it has a continuous inverse.  I'm not sure if this is correct, but I seem to recall this claim in a video about tensors I viewed in another life. But even if it's true, it seems to conflict with the claim that an ordinary vector in Euclidean space is a tensor because it's invariant under linear (?) transformations. In this formulation, it is the argument of T, which we can call V, which is invariant, not the map T. I'd appreciate it if someone here could clarity my confusion. TY, AG
A tensor is a geometric object (possibly in an abstract space). 

It's an ALGEBRAIC object, NOT a geometric object, defined on a vector space, as a multilinear function which maps to a real number. The inner product function on an inner product space is a simple example of a tensor. To prove it's invariant under a change in coordinates, one must take two arbitrary vectors in the vector space under consideration, perform a change of coordinates, and prove the mapping of the function remains unchanged. When I figure out how that's done, I'll post it. AG

Russell Standish

Nov 10, 2025, 4:05:39 AM
to everyth...@googlegroups.com
On Mon, Nov 10, 2025 at 12:19:49AM -0800, Alan Grayson wrote:
>
> It's an ALGEBRAIC object, NOT a geometric object, defined on a vector space, as
> a multilinear function which maps to a real number.

Calling it an _algebraic_ object is exactly what obfuscates what
tensors are all about. Tensors are not a collection of numbers, just
as vectors are not collections of numbers. Vectors are geometric
objects, as are tensors.

Alan Grayson

Nov 10, 2025, 4:05:56 AM
to Everything List
On Monday, November 10, 2025 at 1:15:42 AM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 10:34:38PM -0800, Alan Grayson wrote:
>
>
> On Sunday, November 9, 2025 at 11:16:05 PM UTC-7 Russell Standish wrote:
>
> On Sun, Nov 09, 2025 at 09:55:15PM -0800, Alan Grayson wrote:
> >
> >
> > Someday I might find a teacher who can really define tensors, but that
> day has
> > yet to arrive. Standish seems to come close, but does every linear
> multivariate
> > function define a tensor? I'm waiting to see his reply. AG
>
> Well I did say multilinear function, but the answer is yes, every
> multilinear function on a vector space is a tensor, and vice-versa.
>
>
> How does one prove that every multilinear function on a vector space is
> invariant under a 
> change in coordinates? What exactly happens to its matrix representation? And

A vector is a geometric quantity having direction and length. As such
it is independent of any coordinate system that might be applied to
the space (although the list of numbers representing the components of
the vector in a given coordinate system must vary covariantly with the
coordinate system varying). A function operating on vectors, and
returning vectors or scalars must therfore also be independent of the
coordinate system.

This is OK, because a tensor is defined for vectors which are invariant 
under changes in coordinates. AG

Alan Grayson

Nov 10, 2025, 4:28:15 AM
to Everything List
On Monday, November 10, 2025 at 2:05:39 AM UTC-7 Russell Standish wrote:
On Mon, Nov 10, 2025 at 12:19:49AM -0800, Alan Grayson wrote:
>
> It's an ALGEBRAIC object, NOT a geometric object, defined on a vector space, as
> a multilinear function which maps to a real number.

Calling it an _algebraic_ object is exactly what obfuscates what
tensors are all about. Tensors are not a collection of numbers, just
as vectors are not collections of numbers. Vectors are geometric
objects, as are tensors.

I disagree. Vectors are geometric objects, but tensors are algebraic in
that they're defined as functions. AG 

Alan Grayson

Nov 10, 2025, 6:02:13 AM
to Everything List
On Sunday, November 9, 2025 at 2:07:30 PM UTC-7 Brent Meeker wrote:


On 11/9/2025 1:11 AM, Alan Grayson wrote:


On Saturday, November 8, 2025 at 6:25:17 AM UTC-7 Alan Grayson wrote:
In some treatments of tensors, they're described as linear maps. So, in GR, if we have a linear map described as a 4x4 matrix of real numbers, which operates on a 4-vector described as a column matrix with entries (ct, x, y, z), which transforms to another 4-vector, what must be added in this description to claim that the linear transformation satisfies the definition of a tensor? TY, AG

Let's call the linear transformation T, then the answer to my question might be that T is a tensor iff it has a continuous inverse.  I'm not sure if this is correct, but I seem to recall this claim in a video about tensors I viewed in another life. But even if it's true, it seems to conflict with the claim that an ordinary vector in Euclidean space is a tensor because it's invariant under linear (?) transformations. In this formulation, it is the argument of T, which we can call V, which is invariant, not the map T. I'd appreciate it if someone here could clarity my confusion. TY, AG
A tensor is a geometric object (possibly in an abstract space).  It transforms covariantly; which means that changes in coordinates (even non-linear changes in coordinates) leave it the same.  A physical vector, like velocity, is a one-dimensional tensor.  The same vector, or tensor, has different representations in different coordinate systems.  Here's an excerpt from my general relativity lectures.

Vectors are used to represent things like motion, force, flow. You think of them as little arrows that show the direction of the motion, force or flow and the length of the arrow tells you the magnitude ot the motion, force or flow. So vectors have terms, one for each dimension of the space they are in. In the plane, which is two dimensional, vectors have two components. But the values of the components change depending on which coordinate system you choose to represent them. The coordinates are something we impose to facilitate our calculation. But the vector or tensor is a THING that is independent of the coordinates we use to describe them. Just as I could give directions from this building to the upper parking lot by saying it's 250ft that way or I could say it's 200ft north and 150ft west. The distance and direction would be the same, only the description is different.

Tensors are just a another step up from vectors. They describe how vectors are changing. Here's a good example of a vector field. It's called a field because there's a vector at each point. It shows the flow of water out of Monterey bay at a particular moment. 

For any particular flow field we can ask how do the vectors around a particular bit of water change as that bit of water is carried along by the flow. The answer is a tensor. Just as a vector is the abstraction of a little arrow, the tensor can be thought of an as abstraction of a little circle. And then the question becomes how does this circle get distorted as the water flows.


Since a tensor maps to real numbers, the tensor field consists of real numbers, not, as you have it, of vectors. AG

Alan Grayson

Nov 10, 2025, 7:00:47 AM
to Everything List
On Monday, November 10, 2025 at 2:05:39 AM UTC-7 Russell Standish wrote:
On Mon, Nov 10, 2025 at 12:19:49AM -0800, Alan Grayson wrote:
>
> It's an ALGEBRAIC object, NOT a geometric object, defined on a vector space, as
> a multilinear function which maps to a real number.

Calling it an _algebraic_ object is exactly what obfuscates what
tensors are all about. Tensors are not a collection of numbers, just
as vectors are not collections of numbers. Vectors are geometric
objects, as are tensors.

A tensor field is, in fact, a collection of real numbers at different 
positions in the field. AG 

Brent Meeker

Nov 10, 2025, 1:58:41 PM
to everyth...@googlegroups.com
Why don't you read a book?  Russell gave you a definition.  I gave you an example.  Nobody wants to write a lot of math text online.

Brent

Alan Grayson

Nov 10, 2025, 3:07:13 PM
to Everything List
Why don't you read my comments before replying? I accept Russell's definition. Moreover, a tensor can be an inner product, and maps to a real number, so a tensor field is not like many arrows but real numbers. So your example is misleading. Like most teachers of tensors, you are averse to giving a precise definition, which Russell did. AG

Alan Grayson

Nov 10, 2025, 3:53:47 PM
to Everything List
You write "Nobody wants to write a lot of math text online." Do you know that the proof that tensors are invariant to changes in coordinate systems does NOT involve writing any mathematics? It follows directly from the definition of tensors. AG

Alan Grayson

Nov 10, 2025, 10:50:29 PM
to Everything List
On Monday, November 10, 2025 at 1:15:42 AM UTC-7 Russell Standish wrote:
On Sun, Nov 09, 2025 at 10:34:38PM -0800, Alan Grayson wrote:
>
>
> On Sunday, November 9, 2025 at 11:16:05 PM UTC-7 Russell Standish wrote:
>
> On Sun, Nov 09, 2025 at 09:55:15PM -0800, Alan Grayson wrote:
> >
> >
> > Someday I might find a teacher who can really define tensors, but that
> day has
> > yet to arrive. Standish seems to come close, but does every linear
> multivariate
> > function define a tensor? I'm waiting to see his reply. AG
>
> Well I did say multilinear function, but the answer is yes, every
> multilinear function on a vector space is a tensor, and vice-versa.
>
>
> How does one prove that every multilinear function on a vector space is
> invariant under a 
> change in coordinates? What exactly happens to its matrix representation? And

A vector is a geometric quantity having direction and length. As such
it is independent of any coordinate system that might be applied to
the space (although the list of numbers representing the components of
the vector in a given coordinate system must vary covariantly with the
coordinate system varying). A function operating on vectors, and
returning vectors or scalars must therfore also be independent of the
coordinate system.

I think you deserve a special "THANK YOU!" Now, after many years, I finally
understand why tensors are invariant under changes in coordinate systems.
The problem has always been the reluctance or inability of most physicists
to explicitly state the MATHEMATICAL definition of a tensor. They usually omit this
necessary definition by just asserting they're mathematical entities which
are invariant under changes in coordinate systems. The explanation is totally
simple; namely, tensors are defined on vectors in vector spaces, and these
vectors are invariant under changes in coordinate systems, so the function
defining tensors must likewise have this property since these vectors are
the domains on which these functions operate. NOT COMPLICATED! I think
you nailed it because your core orientation is more in mathematics than physics. 
Finally, please post your summary of GR when you have time. I am anxious to
read it. AG

> I did write an 8 page article appearing in our student rag "The
> Occasional Quark" when I was a physics student, which was my attempt
> at explaining General Relativity when I was disgusted by the hash job
> done by our professor. I haven't really thought about it much since
> that time, though. I can also recommend the heavy tome by Misner,
> Thorne and Wheeler.
>
> I could scan the article and post it to this list, but not today - I
> have a few other things on my plate before finishing up.

Russell Standish

Nov 11, 2025, 8:37:20 PM
to everyth...@googlegroups.com
As promised below, my student article on differentiable manifolds can
now be found at
https://www.hpcoders.com.au/docs/differentiableManifolds.pdf

Apologies for the difficult to read font - this was done on Wordstar
and a dot matrix printer, before laser printers (and LaTeX) became a
thing.

Cheers

Alan Grayson

Nov 12, 2025, 8:28:56 AM
to Everything List
On Tuesday, November 11, 2025 at 6:37:20 PM UTC-7 Russell Standish wrote:
As promised below, my student article on differentiable manifolds can
now be found at
https://www.hpcoders.com.au/docs/differentiableManifolds.pdf

Apologies for the difficult to read font - this was done on Wordstar
and a dot matrix printer, before laser printers (and LaTeX) became a
thing.

Cheers

Thank you. It's readable by enlarging the font. Tell me if you agree with this
statement: a manifold is a topological space on which a coordinate system
is defined, but to actually define a coordinate system one needs additional
information, namely, that the topological space is also a metric space. But the
problem is that a metric space requires points to have labels, and I don't see
how points can have labels unless there's a pre-existing coordinate system.
IOW, there's a circularity here that I want to avoid, but I'm not sure how to do
so. AG

On Mon, Nov 10, 2025 at 05:15:53PM +1100, Russell Standish wrote:
> On Sun, Nov 09, 2025 at 09:55:15PM -0800, Alan Grayson wrote:
> >
> >
> > Someday I might find a teacher who can really define tensors, but that day has
> > yet to arrive. Standish seems to come close, but does every linear multivariate
> > function define a tensor? I'm waiting to see his reply. AG
>
> Well I did say multilinear function, but the answer is yes, every
> multilinear function on a vector space is a tensor, and vice-versa.
>
> I did write an 8 page article appearing in our student rag "The
> Occasional Quark" when I was a physics student, which was my attempt
> at explaining General Relativity when I was disgusted by the hash job
> done by our professor. I haven't really thought about it much since
> that time, though. I can also recommend the heavy tome by Misner,
> Thorne and Wheeler.
>
> I could scan the article and post it to this list, but not today - I
> have a few other things on my plate before finishing up.

Russell Standish

Nov 12, 2025, 5:49:23 PM
to everyth...@googlegroups.com
Bear in mind I haven't really thought about this stuff since my early 20s :).

Most of the areas of physics where this is used would be a metric
topological space, which can have a coordinate system (but not
necessarily uniquely defined).

I don't get the requirement that points can be labelled. IIUC, this
cannot be done in any continuous space anyway - there are uncountably
many more points in a continuous space than there are possible
labels.

Cheers

Alan Grayson

Nov 13, 2025, 4:45:30 AM
to Everything List
On Wednesday, November 12, 2025 at 3:49:23 PM UTC-7 Russell Standish wrote:
Bear in mind I have really thought about this stuff since my early 20s :).

Most of the areas of physics where this is used would be a metric
topological space, which can have a coordinate system (but not
necessarily uniquely defined).

If we have a topological space, I don't think we have enough information to 
define a coordinate system. But if we say it's a plane, how can we define
the open sets without having points labeled in that space? That is, how do we
get the labels and/or locations without a preexisting coordinate system? It's
a primitive question I am raising here. AG

Russell Standish

Nov 14, 2025, 1:14:13 AM
to everyth...@googlegroups.com
On Sat, Nov 08, 2025 at 05:25:16AM -0800, Alan Grayson wrote:
> In some treatments of tensors, they're described as linear maps. So, in GR, if
> we have a linear map described as a 4x4 matrix of real numbers, which operates
> on a 4-vector described as a column matrix with entries (ct, x, y, z), which
> transforms to another 4-vector, what must be added in this description to claim
> that the linear transformation satisfies the definition of a tensor? TY, AG

The operation of matrix multiplication is linear over the vectors on
the right hand side.

Is that what you're missing?


Alan Grayson

Nov 14, 2025, 8:12:47 PM
to Everything List
On Thursday, November 13, 2025 at 11:14:13 PM UTC-7 Russell Standish wrote:
On Sat, Nov 08, 2025 at 05:25:16AM -0800, Alan Grayson wrote:
> In some treatments of tensors, they're described as linear maps. So, in GR, if
> we have a linear map described as a 4x4 matrix of real numbers, which operates
> on a 4-vector described as a column matrix with entries (ct, x, y, z), which
> transforms to another 4-vector, what must be added in this description to claim
> that the linear transformation satisfies the definition of a tensor? TY, AG

The operation of matrix multiplication is linear over the vectors on
the right hand side.

Is that what you're missing?

Suppose T is a tensor which operates on vector v in some vector space. Then T(v)
maps to a real number and is linear in v. If we change coordinates, v's coordinates will
change but v remains the same, so T is invariant wrt coordinate transformations.
Now let's look at the situation using matrix representation and assume we're dealing
with a 4 dimensional spacetime manifold. Then T can be represented by a 4x4 matrix,
and v can be represented as a column matrix with entries ct, x, y, z.  When T operates 
on v, we get another column vector, and not a real constant! What am I doing wrong? 
Doesn't every tensor map to a real number? Also, if T is a function of two vectors, T(u,v),
then using matrix notation, how is T evaluated to get a real number as the result? Do 
we model u as a column matrix and v as a row matrix?  Thanks for your time. AG 

Russell Standish

Nov 14, 2025, 9:42:23 PM
to everyth...@googlegroups.com
T(u,v) = uᵀMv where M is the matrix representation of T, and ᵀ is the
transpose operator.
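Spelling that out numerically (M, u, v arbitrary), and checking that uᵀMv is linear in each slot, which is what makes it a bilinear map, i.e. a rank 2 tensor:

import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])        # matrix representation of a rank 2 tensor T
u = np.array([1.0, -1.0])
v = np.array([2.0, 5.0])

T = lambda x, y: x @ M @ y        # T(u, v) = u^T M v, a real number

a, b = 2.0, -3.0
w = np.array([0.5, 1.5])
print(np.isclose(T(a * u + b * w, v), a * T(u, v) + b * T(w, v)))   # linear in the first slot
print(np.isclose(T(u, a * v + b * w), a * T(u, v) + b * T(u, w)))   # linear in the second slot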

Alan Grayson

Nov 14, 2025, 10:30:29 PM
to Everything List
On Friday, November 14, 2025 at 7:42:23 PM UTC-7 Russell Standish wrote:
T(u,v) = uᵀMv where M is the matrix representation of T, and ᵀ is the
transpose operator.

This is too succinct for me to understand your explanation. AG 

Alan Grayson

Nov 15, 2025, 11:22:23 PM
to Everything List
On Friday, November 14, 2025 at 8:30:29 PM UTC-7 Alan Grayson wrote:
On Friday, November 14, 2025 at 7:42:23 PM UTC-7 Russell Standish wrote:
T(u,v) = uᵀMv where M is the matrix representation of T, and ᵀ is the
transpose operator.

This is too succinct for me to understand your explanation. AG 

Is the above how a tensor is generally evaluated? How would it be evaluated if
T has three independent variables, u,v,w? AG

If u and v are modeled as row matrices, then u transposed is a column vector, and
the total result is a real number. But I don't think it's easy to show that this is the
same value obtained by modeling the tensor as a linear function of u and v. Offhand,
do you have a link for showing the equivalence?  Further, in the case of T(u), how is 
the result a real number when using matrix notation? It looks like a vector when u
is modeled as a column vector. OTOH, I don't think we can model u as a row vector
to do the calculation in this situation (or can we?). AG 

Russell Standish

Nov 17, 2025, 11:54:00 PM
to everyth...@googlegroups.com
On Sat, Nov 15, 2025 at 08:22:23PM -0800, Alan Grayson wrote:
>
>
> On Friday, November 14, 2025 at 8:30:29 PM UTC-7 Alan Grayson wrote:
>
> On Friday, November 14, 2025 at 7:42:23 PM UTC-7 Russell Standish wrote:
>
> T(u,v) = uᵀMv where M is the matrix representation of T, and ᵀ is the
> transpose operator.
>
>
> This is too succinct for me to understand your explanation. AG 
>
>
> Is the above how a tensor is generally evaluated? How would it be evaluated if
> T has three independent variables, u,v,w? AG
>
> If u and v are modeled as row matrices, then u transposed is a colum vector,
> and
> the total result is a real number. But I don't think it's easy to show that
> this is the
> same value obtained by modeling the tensor as a linear function of u and v.
> Offhand.
> do you have a link for showing the equivalence?  Further, in the case of T(u),
> how is 
> the result a real number when using matrix notation? It looks like a vector
> when u
> is modeled as a column vector. OTOH, I don't think we can model u as a row
> vector
> to do the calculation in this situation (or can we?). AG 
>

This is Linear Algebra 101.

To convince yourself, try it with a 2x2 matrix to make the
calculations easier. Extending the result to Rⁿ is not difficult.

Exercise:

Show that

(au₁ᵀ + bu₂ᵀ)Mv = au₁ᵀMv + bu₂ᵀMv

and

uᵀM(av₁ + bv₂) = auᵀMv₁ + buᵀMv₂

for a,b∈R, uᵢ,vᵢ ∈ Rⁿ, and M a real valued matrix.

The above two lines are the definition of a bilinear function from Rⁿ×Rⁿ ⟶ R.

For a trilinear function, it is more convenient to use the Einstein
summation convention, but basically it works the same way.
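For example, a hedged sketch of the rank 3 case using numpy.einsum for the Einstein summation (the component array C is random, purely for illustration):

import numpy as np

rng = np.random.default_rng(1)
n = 2
C = rng.normal(size=(n, n, n))    # components of a hypothetical rank 3 tensor
u, v, w = rng.normal(size=(3, n))

# T(u, v, w) = C_ijk u_i v_j w_k, summing over repeated indices.
T = lambda u, v, w: np.einsum('ijk,i,j,k->', C, u, v, w)

a, b = 1.5, -0.5
u2 = rng.normal(size=n)
print(np.isclose(T(a * u + b * u2, v, w), a * T(u, v, w) + b * T(u2, v, w)))   # linear in the first slot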




Alan Grayson

Nov 19, 2025, 2:31:23 AM
to Everything List
If they're definitions, there's nothing to be shown or proven. But I have two
questions relating to this subject. First, since uᵀ is a column matrix, is it OK
to place it on the RHS of M, with the convention that Muᵀ is evaluated first, 
followed by the result being evaluated by applying v (a row matrix), so
we get a real constant as the result? Second, if the function has one 
independent variable, say u, I don't see how we can use matrix notation to
evaluate the tensor to get a real value as the result. TY, AG

For a trilinear function, it is more convenient to use the Einstein
summation convention, but basically it works the same way.
 

Russell Standish

Nov 20, 2025, 4:02:07 AM
to everyth...@googlegroups.com
On Tue, Nov 18, 2025 at 11:31:22PM -0800, Alan Grayson wrote:
>
>
> On Monday, November 17, 2025 at 9:54:00 PM UTC-7 Russell Standish wrote:
> This is Linear Algebra 101.
>
> To convince yourself, try it with a 2x2 matrix to make the
> calculations easier. Extending the result to Rⁿis not difficult.
>
> Exercise:
>
> Show that
>
> (auᵀ₁+buᵀ₂)Mv = auᵀ₁Mv + buᵀ₂Mv
>
> and
>
> uᵀM(av₁+bv₂)=auᵀMv₁+ buᵀMv₂
>
> for a,b∈R, uᵢ,vᵢ ∈ Rⁿ, and M a real valued matrix.
>
> The above two lines are the definition of a bilinear function from Rⁿ⟶ R.
>
>
> If they're definitions, there's nothing to be shown or proven. But I have two
> questions relating to this subject. First, since uᵀ is a column matrix, is it
> OK
> to place it on the RHS of M, with the convention that Muᵀ is evaluated first, 
> followed by the result being evaluated by applying v (a row matrix), so we
> we get a real constant as the result? Second, if the function has one 
> independent variable, say u, I don't see how we can use matrix notation to
> evaluate the tensor To get a real value as the result. TY, AG

You mean uᵀv=u.v∈R? In this case u is a vector, and uᵀ is a rank 1 tensor.

Cheers

Alan Grayson

Nov 20, 2025, 4:07:19 AM
to Everything List
On Thursday, November 20, 2025 at 2:02:07 AM UTC-7 Russell Standish wrote:
On Tue, Nov 18, 2025 at 11:31:22PM -0800, Alan Grayson wrote:
>
>
> On Monday, November 17, 2025 at 9:54:00 PM UTC-7 Russell Standish wrote:
> This is Linear Algebra 101.
>
> To convince yourself, try it with a 2x2 matrix to make the
> calculations easier. Extending the result to Rⁿis not difficult.
>
> Exercise:
>
> Show that
>
> (auᵀ₁+buᵀ₂)Mv = auᵀ₁Mv + buᵀ₂Mv
>
> and
>
> uᵀM(av₁+bv₂)=auᵀMv₁+ buᵀMv₂
>
> for a,b∈R, uᵢ,vᵢ ∈ Rⁿ, and M a real valued matrix.
>
> The above two lines are the definition of a bilinear function from Rⁿ⟶ R.
>
>
> If they're definitions, there's nothing to be shown or proven. But I have two
> questions relating to this subject. First, since uᵀ is a column matrix, is it
> OK
> to place it on the RHS of M, with the convention that Muᵀ is evaluated first, 
> followed by the result being evaluated by applying v (a row matrix), so we
> we get a real constant as the result? Second, if the function has one 
> independent variable, say u, I don't see how we can use matrix notation to
> evaluate the tensor To get a real value as the result. TY, AG

You mean uᵀv=u.v∈R? In this case u is a vector, and uᵀ is a rank 1 tensor.

Cheers

Forget it. I see you're not really interested in answering my question. AG 

Russell Standish

Nov 20, 2025, 7:57:17 PM
to everyth...@googlegroups.com
I'm trying to guess what your question means, and then to answer it.

Maybe try rephrasing it.

I'm not trying to diss you, but this stuff really is Linear Algebra
101. I don't know if you ever studied that in the past and have forgotten
stuff, or got a confusing presentation on matrix calculations that
creates more confusion than enlightenment (a lot of mathematics
courses for engineers are like that, sadly), or that you completely
missed studying that.

In any case, it might be worthwhile you going through some of the
Wikipedia pages on the subject - generally Wikipedia does a fairly
good job on mathematical topics, though I can't specifically recommend
their treatment of Linear Algebra. In fact my Linear Algebra lecturer
specifically thought all textbooks on the subject were bunk,
admittedly this was 4 decades ago, so things might have improved.


Alan Grayson

Nov 20, 2025, 10:02:53 PM
to Everything List
I studied linear algebra, but my questions involve tensors. If a tensor
T is defined as a linear function whose domain is a vector space, and 
maps to a real number, how does one get a real number from T(u), if we
do the calculation using matrices? Here there is no v, just u. AG

Russell Standish

Nov 21, 2025, 5:27:17 PM
to everyth...@googlegroups.com
On Thu, Nov 20, 2025 at 07:02:53PM -0800, Alan Grayson wrote:
>
>
> I studied linear algebra, but my questions involve tensors. If a tensor
> T is defined as a linear function whose domain is a vector space, and 
> maps to a real number, how does one get a real number from T(u), if we
> do the calculation using matrices? Here there is no v, just u. AG

A matrix corresponds to a rank 2 tensor, ie T(u,v)∈R. T(u)∈R
corresponds to a rank 1 tensor. In matrix notation, a rank 1 tensor is
a transposed vector, ie vᵀ for some vector v∈Rⁿ. vᵀu in matrix
notation corresponds to v.u (ie dot or inner product of two vectors).
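In matrix terms that looks like the following (numbers arbitrary): the row matrix vᵀ applied to the column matrix u gives a 1x1 result, effectively a real number, equal to the dot product v.u.

import numpy as np

v_T = np.array([[1.0, 2.0, 3.0]])        # 1x3 row matrix: the rank 1 tensor v^T
u = np.array([[4.0], [5.0], [6.0]])      # 3x1 column matrix: the vector u

print(v_T @ u)                           # [[32.]] -- a 1x1 matrix, i.e. a real number
print(np.dot(v_T.ravel(), u.ravel()))    # 32.0, the dot product v.u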

Alan Grayson

Nov 22, 2025, 6:59:00 AM
to Everything List
On Friday, November 21, 2025 at 3:27:17 PM UTC-7 Russell Standish wrote:
On Thu, Nov 20, 2025 at 07:02:53PM -0800, Alan Grayson wrote:
>
>
> I studied linear algebra, but my questions involve tensors. If a tensor
> T is defined as a linear function whose domain is a vector space, and 
> maps to a real number, how does one get a real number from T(u), if we
> do the calculation using matrices? Here there is no v, just u. AG

A matrix corresponds to a rank 2 tensor, ie T(u,v)∈R. T(u)∈R
corresponds to a rank 1 tensor. In matrix notation, a rank 1 tensor is
a transposed vector, ie vᵀ for some vector v∈Rⁿ. vᵀu in matrix
notation corresponds to v.u (ie dot or inner product of two vectors).

I'm seeking an unambiguous definition of a TENSOR. You wrote earlier
that a tensor is a MAP whose arguments are VECTORS in a vector space,
which MAP to real numbers, and is INVARIANT under changes in coordinate
systems. Your definition seems OK, but upon more analysis I find it 
kind-of vacuous. Firstly, any function which depends on elements in a 
vector space which are invariant under changes in coordinate systems,
will necessarily be invariant under changes in coordinate systems, and
it doesn't matter if that function is linear or not in its arguments.  So, is
a tensor just limited by the condition of linearity of its arguments? The
invariance under coordinate transformations is a direct result of what
its arguments are, and since vectors are invariant, the tensor T must
also have this property. That is, the invariance property of T is totally
dependent on the invariance property of its domain, the invariant vectors
in some vector space. TY, AG

Moreover, you claim an invariant vector is in fact a tensor of rank 1. 
What is its MAP? Why do you need to introduce v to evaluate T(u), 
which is just a function of u? Using matrices, there's no way to get
a constant as the result of T(u) (which I assume is a row matrix). 
I suppose a constant is a tensor of rank 0. What is its MAP, your entity
that DEFINES a tensor? TY, AG

Russell Standish

Nov 23, 2025, 6:17:45 PM
to everyth...@googlegroups.com
Correct so far. Yes - it seems trivial so far, but when you work out
how the components of the tensor change with changes of the
coordinate system you get the concept of covariance, and when you
apply the tensors to tangent spaces on Riemann manifolds, it becomes
decidedly non-trivial. When you're ready, you'll need to work through
covariant differentiation, where there is an additional term coming
from curvature of spacetime.

>
> Moreover, you claim an invariant vector is in fact a tensor of rank 1. 

It is actually the transpose of a vector.

> What is its MAP?

The transpose uᵀ. Using tensor multiplication, uᵀv === u.v, where . is
the familiar inner product.


> Why do you need to introduce v to evaluate T(u), 

v is the vector corresponding to the tensor T.

> which is just a function of u? Using matrices, there's no way to get
> a constant as the result of T(u) (which I assume is a row matrix). 
> I suppose a constant is a tensor of rank 0. What is its MAP, your entity
> that DEFINES a tensor? TY, AG

Yes - a scalar is considered to be a rank 0 tensor. As you point out,
it is not really a map of anything, though, so I suppose it is the
exception to the rule that tensors are multilinear maps.


Alan Grayson

Nov 24, 2025, 6:33:04 AM
to Everything List
So far, the concept of tensors seems unrelated to the additional term
you say arises when differentiating the tensor field. Can you say
something more informative about this result? AG

> Moreover, you claim an invariant vector is in fact a tensor of rank 1. 

It is actually the transpose of a vector.

> What is its MAP?
 
The transpose uᵀ. Using tensor multiplication, uᵀv === u.v, where . is
the familiar inner product.

> Why do you need to introduce v to evaluate T(u), 

v is the vector corresponding to the tensor T.

No. v has nothing to do with T(u), unless you explicitly define v as uᵀ. 
I suspect what your definition of tensors is lacking is any reference or
use of the dual space related to the vector space on which the domain
of T operates. AG 

Russell Standish

Nov 24, 2025, 8:14:42 PM
to everyth...@googlegroups.com
In a handwavy way, the additional term (aka Christoffel symbol) comes
from the fact that the coordinate system itself must change as the
tangent space changes from point to point in a Riemann manifold. Of
course, for the details, you need to work out how to translate tangent
spaces, which involves full-on tensor calculus. From memory, Misner,
Thorne and Wheeler give a pretty good account of how to do that.

>
> > Moreover, you claim an invariant vector is in fact a tensor of rank 1. 
>
> It is actually the transpose of a vector.
>
> > What is its MAP?
>
>  
>
> The transpose uᵀ. Using tensor multiplication, uᵀv === u.v, where . is
> the familiar inner product.
>
> > Why do you need to introduce v to evaluate T(u), 
>
> v is the vector corresponding to the tensor T.
>
>
> No. v has nothing to do with T(u), unless you explicitly define v as uᵀ.
> I suspect what your definition of tensors is lacking is any reference to,
> or use of, the dual space of the vector space on which T operates. AG
>

Sorry - I switched notation in the previous paragraph. I think my
original post had T(u) = vᵀu - so v is the vector corresponding to T;
more accurately, vᵀ is the tensor, and it is a vector in the dual
space. The terms covariant vectors (for the dual space) and
contravariant vectors (for the original tangent space) are also used,
to highlight how the components of these objects change under a change
of coordinate system.

Since dual spaces are also vector spaces, one can equally talk about
the original vector u being a tensor, this time representing a map
from the dual space to the field (ie the reals).
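
Concretely, with the standard conventions, the two transformation rules
referred to here are

    u'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\alpha}} u^{\alpha}
    \quad \text{(contravariant: tangent-space components)},
    \qquad
    v'_{\mu} = \frac{\partial x^{\alpha}}{\partial x'^{\mu}} v_{\alpha}
    \quad \text{(covariant: dual-space components)},

so the pairing v_{\mu} u^{\mu} = vᵀu comes out to the same number in every
coordinate system, which is what makes T(u) = vᵀu coordinate-independent.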

--

Alan Grayson

unread,
Nov 25, 2025, 8:36:07 AM (7 days ago) Nov 25
to Everything List
TY. This is very useful. I had a fairly esoteric question which you didn't 
reply to. It is how, in a constructive sense, we can define a coordinate
system on a topological space, to convert it to a manifold. The answer
might be related to the Axiom of Choice. Specifically, say for a plane, 
how do we choose a point which we will call the origin of the coordinate
system, when there is no way to distinguish one point from another? AG 

Alan Grayson

unread,
Nov 25, 2025, 3:48:59 PM (7 days ago) Nov 25
to Everything List

One other thing: when evaluating the tensor T(u), how do you know which
co-vector (member of the dual vector space) to use, or doesn't it matter?
Won't different co-vectors result in different real values for the tensor? AG

Alan Grayson

unread,
Nov 27, 2025, 10:29:35 AM (5 days ago) Nov 27
to Everything List
IMO there's no way to do this, which is why we have the Axiom of Choice; but
in this case we have a situation where, instead of an uncountable collection of
uncountable states, we have only a single uncountable state. So, in this
reduced situation, all we can say is that we "can" select one point on this
set, say to define an origin of coordinates, but we can't say how to select it. AG

One other thing: when evaluating the tensor T(u), how do you know which
co-vector (member of the dual vector space) to use, or doesn't it matter?
Won't different co-vectors result in different real values for the tensor? AG

This issue remains and seems important. If we choose the transpose of u to
evaluate the tensor, we get the inner product. But is this what Einstein means
by the tensors in his field equations? AG

Alan Grayson

unread,
Nov 27, 2025, 10:35:44 AM (5 days ago) Nov 27
to Everything List
CORRECTION: I meant to write,  " ... instead of having an uncountable collection of
uncountable SETS, we have only a single uncountable SET." AG 

Russell Standish

unread,
Nov 28, 2025, 4:54:07 PM (4 days ago) Nov 28
to everyth...@googlegroups.com
On Tue, Nov 25, 2025 at 05:36:07AM -0800, Alan Grayson wrote:

>
> TY. This is very useful. I had a fairly esoteric question which you didn't 
> reply to. It is how, in a constructive sense, we can define a coordinate
> system on a topological space, to convert it to a manifold. The answer
> might be related to the Axiom of Choice. Specifically, say for a plane, 
> how do we choose a point which we will call the origin of the coordinate
> system, when there is no way to distinguish one point from another? AG 
>

The choice of origin is arbitrary. Generally, we choose a point that
makes calculations easier. When building a table, a better choice is
one of the corners of the table, not the Greenwich meridian (unless
you happen to be in Greenwich!). When working out the distance between
London and New York, the Greenwich meridian is quite appropriate.

--

Russell Standish

unread,
Nov 28, 2025, 5:17:02 PM (4 days ago) Nov 28
to everyth...@googlegroups.com
On Tue, Nov 25, 2025 at 12:48:59PM -0800, Alan Grayson wrote:
>
> One other thing: when evaluating the tensor T(u), how do you know which
> co-vector (member of the dual vector space) to use, or doesn't it matter?
> Won't different co-vectors result in different real values for the tensor? AG

The set of linear functions from Rⁿ→R is a vector space. The numerical
values of the components of a vector depend on your chosen basis, of
course, which is quite arbitrary; however, it is usually convenient to
choose a basis dᵢ of the dual space such that "orthogonality"
relations hold with respect to your chosen basis eⱼ of the original
vector space, ie:

dᵢ(eⱼ) = δᵢⱼ

Given any basis of a vector space, you can orthonormalise it by
means of an algorithm called "Gram-Schmidt orthonormalisation".
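
A small numpy sketch of both points, using an arbitrary made-up basis of R²:
the dual basis satisfying dᵢ(eⱼ) = δᵢⱼ can be read off from the matrix inverse
of the basis, and Gram-Schmidt orthonormalises the basis itself.

import numpy as np

# An arbitrary (non-orthogonal) basis of R^2, one basis vector per column.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])

# Dual basis: the rows of E^{-1} are the covectors d_i, since E^{-1} E = I,
# i.e. d_i(e_j) = delta_ij.
D = np.linalg.inv(E)
print(D @ E)             # identity matrix

# Gram-Schmidt orthonormalisation of the columns of E.
def gram_schmidt(vectors):
    basis = []
    for v in vectors.T:                               # iterate over columns
        w = v - sum(np.dot(v, b) * b for b in basis)  # subtract components along earlier vectors
        basis.append(w / np.linalg.norm(w))           # normalise
    return np.column_stack(basis)

Q = gram_schmidt(E)
print(Q.T @ Q)           # identity matrix: the new columns are orthonormal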


--

Russell Standish

unread,
Nov 28, 2025, 5:26:03 PM (4 days ago) Nov 28
to everyth...@googlegroups.com
On Thu, Nov 27, 2025 at 07:29:34AM -0800, Alan Grayson wrote:
>
>
> IMO there's no way to do this, which is why we have the Axiom of Choice; but
> in this case we have a situation where, instead of an uncountable collection
> of uncountable states, we have only a single uncountable state. So, in this
> reduced situation, all we can say is that we "can" select one point on this
> set, say to define an origin of coordinates, but we can't say how to select
> it. AG
>

Sorry - I can't make sense of your question.

>
> One other thing: when evaluating the tensor T(u), how do you know which
> co-vector (member of the dual vector space) to use, or doesn't it matter?
> Won't different co-vectors result in different real values for the tensor? AG
>
> This issue remains and seems important. If we choose the transpose of u to
> evaluate the tensor, we get the inner product. But is this what Einstein means
> by the tensors in his field equations? AG
>

Yes, although typically in Relativity one lives in "Minkowski space",
ie the "inner product" is gⁱʲxᵢxⱼ, where for flat spacetime g⁰⁰ = -1,
gⁱʲ = δᵢⱼ for i,j > 0, and g⁰ⁱ = 0; ie the axiom of positive
definiteness is discarded.

The stuff about dual spaces, linear maps etc goes over just fine to this
slightly more general situation; inner product spaces are not
required, but they help the intuition.
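
A minimal numpy sketch of that indefinite product with the (-,+,+,+) convention
implied above (the sample 4-vectors are made up):

import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat-spacetime (Minkowski) metric

x = np.array([2.0, 1.0, 0.0, 0.0])   # components (ct, x, y, z) of a sample 4-vector
y = np.array([1.0, 1.0, 0.0, 0.0])

def minkowski(a, b, metric=g):
    # g_{ij} a^i b^j -- like an inner product, but not positive definite
    return a @ metric @ b

print(minkowski(x, x))   # -3.0: "squared lengths" can be negative
print(minkowski(x, y))   # -1.0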

--

Alan Grayson

unread,
Nov 30, 2025, 2:13:05 AM (2 days ago) Nov 30
to Everything List
On Friday, November 28, 2025 at 3:26:03 PM UTC-7 Russell Standish wrote:
On Thu, Nov 27, 2025 at 07:29:34AM -0800, Alan Grayson wrote:
>
>
> IMO there's no way to do this, which is why we have the Axiom of Choice; but
> in this case we have a situation where, instead of an uncountable collection
> of uncountable states, we have only a single uncountable state. So, in this
> reduced situation, all we can say is that we "can" select one point on this
> set, say to define an origin of coordinates, but we can't say how to select
> it. AG
>

Sorry - I can't make sense of your question.

The Axiom of Choice (AoC) asserts that given an uncountable set of sets, each one being
uncountable, there is a set composed of one element of each set of the uncountable set
of sets. The AoC doesn't tell us how such a set is constructed, only that we can assume it
exists. So, in choosing an origin for the coordinate system for a plane, say, we have to apply
the AoC for a single uncountable set, the plane. But there's no way to construct it. Does
this make sense? AG 

Alan Grayson

unread,
Nov 30, 2025, 2:16:52 AM (2 days ago) Nov 30
to Everything List
On Friday, November 28, 2025 at 3:17:02 PM UTC-7 Russell Standish wrote:
On Tue, Nov 25, 2025 at 12:48:59PM -0800, Alan Grayson wrote:
>
> One other thing: when evaluating the tensor T(u), how do you know which
> co-vector (member of the dual vector space) to use, or doesn't it matter?
> Won't different co-vectors result in different real values for the tensor? AG

The set of linear functions from Rⁿ→R is a vector space. The numerical
values of the components of a vector depend on your chosen basis, of
course, which is quite arbitrary; however, it is usually convenient to
choose a basis dᵢ of the dual space such that "orthogonality"
relations hold with respect to your chosen basis eⱼ of the original
vector space, ie:

dᵢ(eⱼ) = δᵢⱼ

Given any basis of a vector space, you can orthonormalise it by
means of an algorithm called "Gram-Schmidt orthonormalisation".

I recall that theorem in a book I studied, Halmos, Finite Dimensional Vector
Spaces. AG 

Russell Standish

unread,
Dec 1, 2025, 5:46:40 PM (14 hours ago) Dec 1
to everyth...@googlegroups.com
On Sat, Nov 29, 2025 at 11:13:05PM -0800, Alan Grayson wrote:
>
>
> On Friday, November 28, 2025 at 3:26:03 PM UTC-7 Russell Standish wrote:
>
> Sorry - I can't make sense of your question.
>
>
> The Axiom of Choice (AoC) asserts that given an uncountable set of sets, each
> one being uncountable, there is a set composed of one element of each set of
> the uncountable set of sets. The AoC doesn't tell us how such a set is
> constructed, only that we can assume it exists. So, in choosing an origin for
> the coordinate system for a plane, say, we have to apply the AoC for a single
> uncountable set, the plane. But there's no way to construct it. Does this make
> sense? AG
>

I don't see the axiom of choice has much bearing here. To choose an
origin, we simply need to choose one point from a single uncountable
set of points. We label finite sets of points all the time - geometry
would be impossible otherwise - consider triangles with vertices
labelled A,B and C.

Indeed not only would geometry be impossible if we couldn't do this,
so would engineering.

Alan Grayson

unread,
Dec 1, 2025, 11:07:14 PM (8 hours ago) Dec 1
to Everything List
On Monday, December 1, 2025 at 3:46:40 PM UTC-7 Russell Standish wrote:
On Sat, Nov 29, 2025 at 11:13:05PM -0800, Alan Grayson wrote:
>
>
> On Friday, November 28, 2025 at 3:26:03 PM UTC-7 Russell Standish wrote:
>
> Sorry - I can't make sense of your question.
>
>
> The Axiom of Choice (AoC) asserts that given an uncountable set of sets, each
> one being uncountable, there is a set composed of one element of each set of
> the uncountable set of sets. The AoC doesn't tell us how such a set is
> constructed, only that we can assume it exists. So, in choosing an origin for
> the coordinate system for a plane, say, we have to apply the AoC for a single
> uncountable set, the plane. But there's no way to construct it. Does this make
> sense? AG
>

I don't see the axiom of choice has much bearing here. To choose an
origin, we simply need to choose one point from a single uncountable
set of points. We label finite sets of points all the time - geometry
would be impossible otherwise - consider triangles with vertices
labelled A,B and C.

You write "we simply need to choose one point from a single uncountable set
points", but how exactly can we do that! That's the issue, the construction of
the coordinate system. In fact, there's no credible procedure for doing that, so
we need the AoC to assert that it can be done. IMO, this is an esoteric issue. 
For example, we can't just assert we can use the number ZERO to construct
the real line, since with ZERO we have, in effect, a coordinate system.AG


Russell Standish

unread,
12:37 AM (7 hours ago) 12:37 AM
to everyth...@googlegroups.com
Rubbish - it is not controversial to pick a set of points from a
finite set of uncountable sets. As I said, we've been doing that since
building ziggurats on the Mesopotamian plain. AoC is only
controversial when it comes to uncountable sets of uncountable sets.

>
> Indeed not only would geometry be impossible if we couldn't do this,
> so would engineering.


--

Alan Grayson

unread,
4:24 AM (3 hours ago) 4:24 AM
to Everything List
It's subtle, maybe too subtle for you to see its relevance. You're imagining throwing
a dart at a flat piece of paper, but that falls far short of a viable construction of a
coordinate system on a plane. You can imagine it being done, and that's the extent
of your proof. AG