Hello all,
I was wondering if anybody knew what the status of the Tensor module is - i.e., is work being done on it, what is the known functionality, and what is the intended functionality? For the past week I have been attempting to use SymPy for basic general relativity (homework problems, really), and it has not gone well.
My naive reading of the documentation suggests that I should be able to use Einstein notation, since I am completely unfamiliar with Penrose's abstract index notation (and it doesn't seem to be as useful computationally, frankly, though the Wikipedia article on it is very short).
However, I keep encountering errors when I try to do basic operations such as adding or multiplying tensors, and I am never quite sure whether they are bugs or features. A minimal example: when you attempt to symmetrize a tensor, you just get back double the tensor components, i.e., given a TensorHead A and indices i0, i1, A(i0, i1) + A(i1, i0) should be manifestly symmetric, but it just results in 2*A(i0, i1). If necessary I can provide minimal working examples in code; I wasn't sure if this is the right place for that.
I honestly found the Tensor module very confusing from the standpoint of somebody who is interested in solving simple GR problems, so I'd like to try to understand the rationale behind it and make things clearer for other users.
I think the most instructive example of what I found confusing was my attempt to symmetrize a tensor. This minimal snippet, taken almost entirely from the examples in the documentation, should demonstrate my point.
from sympy import *
from sympy.tensor.tensor import TensorIndexType, tensorsymmetry, TensorType
from sympy.tensor.tensor import tensor_indices, tensorhead

Lorentz = TensorIndexType('Lorentz', dummy_fmt='L')
Lorentz.data = [1, -1, -1, -1]
i0, i1 = tensor_indices('i0:2', Lorentz)

# declare A as a rank-2 tensor, fully symmetric in its indices
sym2 = tensorsymmetry([1]*2)
S2 = TensorType([Lorentz]*2, sym2)
A = S2('A')

# assign component data
A.data = [[j + 2*i for j in range(2)] for i in range(2)]
A.data            # [[0, 1], [2, 3]]

A_symm = A(i0, i1) + A(i1, i0)
A_symm.data       # [[0, 2], [4, 6]], i.e. 2*A
A.data is [[0, 1], [2, 3]]. One would want A_symm.data to be [[0, 3], [3, 6]], a symmetric tensor. However, SymPy computes it as 2*A; indeed, that is exactly how SymPy represents A_symm symbolically.
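For comparison, symmetrizing the component matrix directly with numpy (a quick check outside the tensor module) gives the expected result:

import numpy as np

M = np.array([[0, 1], [2, 3]])  # the components assigned to A above
print(M + M.T)                  # [[0, 3], [3, 6]], what A_symm.data should be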
Use tensorsymmetry([1], [1]) if you don't want that.

In [28]: tensorsymmetry([1,1]).generators
Out[28]: (Permutation(3)(0, 1),)

In [29]: tensorsymmetry([1],[1]).generators
Out[29]: (Permutation(3),)
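A minimal sketch using the same (old) tensor API as above: with this identity-only symmetry, the sum stays symbolic instead of collapsing to 2*A(i0, i1).

from sympy.tensor.tensor import TensorIndexType, tensorsymmetry, TensorType, tensor_indices

Lorentz = TensorIndexType('Lorentz', dummy_fmt='L')
nosym = tensorsymmetry([1], [1])          # identity permutation only
A = TensorType([Lorentz]*2, nosym)('A')   # rank-2 tensor, no index symmetry
i0, i1 = tensor_indices('i0:2', Lorentz)
print(A(i0, i1) + A(i1, i0))              # stays A(i0, i1) + A(i1, i0)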
I start off with this example because I am genuinely not sure if this is the intended behaviour, since it is unclear to me whether or not the Tensor module is supposed to support typical Einstein notation. If this is a "bug," then there are many things which must be tested and fixed. If this is a "feature," then I probably need to look for a different module to do GR with.
You are correct that I have defined a symmetric tensor and added asymmetric data! However, I took this example almost verbatim from the code examples in the documentation on the website for TensExpr. I believe it would be good to update the examples under TensExpr so that the declared symmetry matches the data.
Second - as a student approaching GR from a physics perspective, the need to declare a symmetry for the indices is very non-intuitive, especially because I have never studied permutations formally, and am therefore completely unfamiliar with the BSGS formalism. Would it be possible to create tensors which have only the identity permutation by default? That way, those of us who are relatively less concerned with canonicalization can use tensors without worrying about conflicts between the data we put in and the symbolic representation.
In [43]: A = tensorhead('A', [Lorentz]*2, [[2]])     # antisymmetric indices
In [44]: B = tensorhead('B', [Lorentz]*2, [[1]*2])   # symmetric indices
In [45]: expr = A(i0, i1)*B(-i0, -i1)
In [46]: expr.canon_bp()   # an antisymmetric tensor contracted with a symmetric one vanishes
Out[46]: 0
Third - I tried running my code with the symmetry definition you provided. The symbolic representation was correct, because I got

>>> A_symm
A(i0, i1) + A(i1, i0)

However, the data had the same problem as before:

>>> A_symm.data
array([[0, 2], [4, 6]], dtype=object)
The _TensorDataLazyEvaluator does not attempt to permute the indices in the _get function when working with a TensAdd expression. Instead, it just adds the component arrays with the indices taken in alphabetical order. So if you attempt

>>> T = B(i0, i1, i2) + C(i1, i0, i2) + D(i2, i0, i1)
>>> T.data

the result will be incorrect. More specifically, assuming i0 = 'i0', etc., the _get function will return the data for

B(i0, i1, i2) + C(i0, i1, i2) + D(i0, i1, i2)

I actually know how to fix this (indeed, when hacking around on my own computer I did fix it) by checking the index positions and making judicious use of the np.swapaxes function.
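Roughly, such a fix might look like the following sketch (an illustration, not the actual _TensorDataLazyEvaluator code; the add_components helper and the (index names, array) pair format are made up for the example). np.transpose with an explicit axis permutation does the same job as repeated np.swapaxes calls:

import numpy as np

def add_components(terms, ref_order):
    # terms: list of (index_names, ndarray) pairs, e.g. (('i1', 'i0', 'i2'), arr)
    # ref_order: tuple of index names giving the target axis order
    total = None
    for names, arr in terms:
        # result axis k must carry the index ref_order[k]; find where that
        # index currently sits in this term and permute the axes to match
        perm = [names.index(n) for n in ref_order]
        aligned = np.transpose(arr, perm)
        total = aligned if total is None else total + aligned
    return total

# e.g. A(i0, i1) + A(i1, i0) on the data from above:
A = np.array([[0, 1], [2, 3]])
print(add_components([(('i0', 'i1'), A), (('i1', 'i0'), A)], ('i0', 'i1')))
# [[0 3]
#  [3 6]]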
I have actually implemented covariant and partial differentiation, in a hacky sort of way, but I'm running into bugs in the multiplication that I haven't tracked down yet.
I guess what I want to know is: are you interested in taking the current sympy.tensor.tensor module and fixing these bugs? I am willing to spend some time implementing/fixing the problems with the _TensorDataLazyEvaluator class; I just don't know what is planned.
With regards to the diffgeom module, I found it to be unintuitive and even less documented than tensor. It was not really clear to me how I was supposed to define arbitrary tensors (I guess build them up from base vector fields? I didn't look into it much), and it is very, very nice to have Einstein index notation.
Keep in mind that I am not any kind of GR theorist or differential geometer; I'm an experimentalist who just wants an easy way to convert equations into code. Therefore, if you can tell me what needs to be done, I'm willing to work on supporting Einstein index notation in sympy.tensor.tensor.
Apart from the bug fixes, it's quite hard to go on rigorously. You wrote that you have some hackish stuff about derivatives; on GitHub you can create a gist, which is usually a piece of code unrelated to a project, which people can see and comment on.
As you may have realized, the tensor module only allows expanded polynomial expressions. That is difficult to change, because the algorithm was written for that, and supporting something different would require a lot of rewriting.
I have a pending PR on the tensor module, which is a major rewrite of the code, but I'm not very satisfied with it.
An alternative is to expose special objects such as partial derivatives to the index structure of a tensor. If you would like to do something, try to see how to create a covariant derivative object that exposes the index structure. Say, CovarDerivative(i, A(j, k)) should be treated as a tensor with indices i, j, k. You will realize that the way the tensor module was written makes this task quite hard.
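To make that concrete, a hypothetical sketch of the kind of object being described might look like this (CovarDerivative is not an existing SymPy class, and nothing below is real API; get_indices just mirrors the name TensMul uses for its ordered index list):

# hypothetical sketch: an object carrying its own derivative index plus the
# indices of the expression it differentiates, so that code walking the index
# structure could treat CovarDerivative(i, A(j, k)) as a rank-3 tensor
class CovarDerivative:
    def __init__(self, index, expr):
        self.index = index   # the derivative index, e.g. i
        self.expr = expr     # the differentiated tensor, e.g. A(j, k)

    def get_indices(self):
        # CovarDerivative(i, A(j, k)) exposes the ordered list [i, j, k]
        return [self.index] + list(self.expr.get_indices())

The hard part, as noted above, is that the existing algorithms only know how to consume TensMul objects, not an object like this.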
On Saturday, November 1, 2014 5:01:04 PM UTC-7, Francesco Bonazzi wrote:
> Apart from the bug fixes, it's quite hard to go on rigorously. You wrote that you have some hackish stuff about derivatives; on GitHub you can create a gist, which people can see and comment on.

I have made a gist with some code I wrote to do derivatives: https://gist.github.com/seniosh/ef0e1550110eec0c6b41#file-grtensor-py Where can that then be discussed? I already realize several problems (partial differentiation gets treated as a Tensor, which is a problem if you want to switch coordinates), but it's a starting point for discussion at least.
> I have a pending PR on the tensor module, which is a major rewrite of the code, but I'm not very satisfied with it.

Should I move discussion to that PR? Or should I make a new one?
All current algorithms assume that TensMul always has index order and that it's the only object allowed to possess it. Obviously, if we introduce CovarDerivative(i0, A(i1, i2)), this should be exposed as a tensor with ordered indices [i0, i1, i2], yet it is not a TensMul.
> An alternative is to expose special objects such as partial derivatives to the index structure of a tensor.
> If you would like to do something, try to see how to create a covariant derivative object that exposes the index structure.
> Say, CovarDerivative(i, A(j, k)) should be treated as a tensor with indices i, j, k. You will realize that the way the tensor module was written makes this task quite hard.
When I implemented it, my CovarDerivative function operated on data, and then returned a new TensMul object, so CovarDerivative(i, A(j, k)) returns dA(i, j, k).
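For reference, the data-level part of such an approach might be sketched like this (an illustration, not the gist's actual code; partial_derivative_data and its signature are made up). A covariant derivative would add Christoffel correction terms on top of the plain partial derivative computed here:

import numpy as np
from sympy import diff

def partial_derivative_data(A_data, coords):
    # A_data: components A[j, k] as sympy expressions of the coordinates
    # returns dA with dA[i, j, k] = d(A[j, k]) / d(coords[i])
    A_data = np.asarray(A_data, dtype=object)
    out = np.empty((len(coords),) + A_data.shape, dtype=object)
    for i, x in enumerate(coords):
        for idx in np.ndindex(*A_data.shape):
            out[(i,) + idx] = diff(A_data[idx], x)
    return out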
I think I understand the intended structure of the tensor.tensor module a bit better now. A lot of the things I have fooled around with are quick hacks that don't really fit the intended use of each class (e.g., just returning a TensMul from a covariant derivative function). A quick question: why is it true that TensAdd objects are not supposed to support index order? Is this a consequence of the other algorithms, like canonicalization, that I'm not as interested in?
I'd like to ask more generally what use a tensor is if the indices are unordered, but I suppose that just reveals my ignorance as a novice physicist: I think of tensors in a pretty concrete matrix-based way, and when I'm not thinking of them as matrices, I'm thinking of them as functions, which require an argument order.
I will take your advice and look more at sympy.diffgeom for the time being - do you know if it's possible to implement mixed tensors there? Perhaps it would be easier to build a TensorArray class that provides an interface more like sympy.tensor.tensor while using the implementation of sympy.diffgeom.
from sympy import Expr

# rough interface sketch: a sympy Expr wrapping an array of components,
# its shape, and its valence (the co-/contravariance of each slot)
class TensorArray(Expr):
    def __new__(cls, ndarray, shape, valence, coord_sys=None):
        return Expr.__new__(cls, ndarray, shape, valence, coord_sys)

def numpy_to_TensorArray(ndarr, valence, coord_sys=None):
    ...
Thank you for taking the time to explain everything about the structure of the module.