ToCanonical is very slow


Petar

May 10, 2023, 5:39:59 PM
to xAct Tensor Computer Algebra
Hi! I have the following expression:

[attached image: expression.PNG]
Calling ToCanonical on this is very slow (see attached example; I aborted after running it for more than an hour). In my actual problem I have a sum of thousands of such terms, so it is not feasible to spend more than an hour canonicalizing each term.

Does anyone have any suggestions on speeding up the canonicalization?

Thanks!!

[attachment: example.nb]

Thomas Bäckdahl

May 10, 2023, 6:46:28 PM
to xa...@googlegroups.com
Hi!

This is a problem which unfortunately can appear in a few cases when you have many indices.

ToCanonical is built around an algorithm for finding a canonical double-coset representative with respect to two groups. In most cases this algorithm is very efficient, but there are some edge cases where it takes much longer. Jose might be able to say a bit more about this.
In your case the two groups have 41287680 and 1428329123020800 elements respectively, so there are a few combinations to analyze.

In general it is a good idea to keep the number of indices down, but sometimes this is not possible.
If one encounters these edge-case problems, one can try to reformulate the problem.
For instance, in your case the following seems to be much quicker.
  ToCanonical[ContractMetric@GradMetricToChristoffel[expr, met, PD], UseMetricOnVBundle -> None]
I don't know if this is what you want to do though.
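Since the full expression is a sum of thousands of such terms, one pragmatic variant of this call is to canonicalize term by term with a time budget, leaving any stubborn terms alone. This is only a sketch, not a tested recipe: it assumes met and PD are set up as in the attached notebook, TimeConstrained is standard Mathematica, and the 10-second budget is an arbitrary choice.

```mathematica
(* Sketch: canonicalize each term of a sum separately, skipping terms that
   exceed a time budget. Assumes met and PD are defined as in the notebook. *)
canonTerm[t_] := TimeConstrained[
  ToCanonical[ContractMetric@GradMetricToChristoffel[t, met, PD],
    UseMetricOnVBundle -> None],
  10, (* seconds per term; a hypothetical budget, adjust as needed *)
  t   (* on timeout, keep the term unchanged *)
]
canonSum[s_Plus] := canonTerm /@ s
canonSum[s_] := canonTerm[s]
```

Mapping canonTerm over the Plus head keeps memory use per call small and makes it easy to log which terms timed out.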

There are also other ideas on different algorithms for the canonicalization, but they are unfortunately not implemented in xAct yet.

Perhaps if you explain a bit about where these expressions come from, we might be able to come up with alternatives that will not require canonicalizing thousands of terms with 28 indices each.

Regards
Thomas

Jose

May 10, 2023, 10:49:39 PM
to xAct Tensor Computer Algebra
Hi,

I agree with all Thomas said. As discussed by Ben Niehoff in section 3.4 of his article https://arxiv.org/pdf/1702.08114.pdf , the worst-case scenarios for monoterm canonicalization are precisely contractions with objects of high symmetry. In this case we have many metric factors, each symmetric and all interchangeable, so there is quite a lot of symmetry.

Using partial derivatives usually complicates canonicalization substantially, because indices cannot be freely moved up and down. Mixing partial derivatives of the metric and Christoffel tensors also adds complication because one can be converted into the other. The former problem is alleviated by using UseMetricOnVBundle -> None. The latter by converting metric derivatives into Christoffels. Those are exactly the improvements Thomas suggested.

Cheers,
Jose.

Petar

Jun 5, 2023, 1:22:29 PM
to xAct Tensor Computer Algebra
Thank you both for the replies! 

In response to Thomas, here is some context about my problem. For reasons that I won't go into, what I want to compute is Box(1/sigma(x,x')), where sigma(x,x') is the geodesic distance between two points x and x' in a general manifold, and Box is the d'Alembertian operator acting at the point x. sigma is a covariant quantity (it is a biscalar), but a closed-form expression for sigma doesn't exist in general, so I expand sigma in an arbitrary coordinate system as follows. I define V^a = X^a - X'^a, where X^a and X'^a are the coordinates of the points x and x'. (Note that V^a is not a covariant object, since it depends on the choice of coordinate system, but by pretending that it is a contravariant vector I can still work with it in xTensor.) Then the short-distance expansion of sigma is

sigma = g_{ab} V^a V^b + \Gamma_{abc} V^a V^b V^c + ...

where the metric and Christoffel are evaluated at the point x. I am showing the expansion to order |V|^3. The ... are higher order terms in |V|, containing derivatives of Christoffels. For my purposes I need Box(1/sigma) to order |V|^2. The final result contains many terms with lots of indices. For example one such term is

[attached image: Capture.PNG]

I would like to canonicalize these terms in order to simplify my expression, but ToCanonical is very slow. (In the example that I originally gave I had converted the Christoffels to derivatives of the metric, but ToCanonical is slow either way.) 

If ToCanonical can't be sped up on terms like this, is it at least possible to quickly estimate (very roughly) how long ToCanonical would take to simplify a given term? This would allow me to only apply ToCanonical to terms that can be quickly canonicalized, and leave the other terms uncanonicalized. Perhaps the number of elements in the two groups which Thomas mentions can be used to give such an estimate? 

Thanks again for your help!

Regards,
Petar

Leo Stein

Jun 5, 2023, 4:34:54 PM
to Petar, Barry Wardell, xAct Tensor Computer Algebra
Dear Petar,

If I gather correctly, I think you're first expanding \sigma near the coincidence limit, and then taking derivatives of 1/\sigma, which of course introduces many terms. I think it would be simpler to use the abstract \sigma, \sigma_a, \sigma_{ab}, etc. in your calculation, and only after taking the derivatives, to perform the near-coincidence expansion of the quantities that remain (see e.g. Part I of https://arxiv.org/abs/1102.0529). This way you end up with very short expressions, e.g.
  g[a,b] CD[-a]@CD[-b][1/sigma[]]
will give
  2 g[a, b] CD[-a]@sigma[] CD[-b]@sigma[] / sigma[]^3 - g[a,b] CD[-a]@CD[-b]@sigma[] / sigma[]^2 .
This then simplifies using the result that \sigma_a \sigma^a = 2\sigma , and that \sigma_a^a = 4 - 1/3 R_{ab} \sigma^a \sigma^b + O(sigma^3), so there is a small cancellation down to
  R_{ab} \sigma^a \sigma^b / (3\sigma^2) + higher order.
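In xTensor, identities like \sigma_a \sigma^a = 2\sigma can be encoded as replacement rules. The following is a hedged sketch only: it assumes sigma[], the metric met, and the covariant derivative CD have already been defined (e.g. via DefTensor and DefMetric), and uses MakeRule, the standard xTensor rule constructor; index names are illustrative.

```mathematica
(* Sketch: encode sigma_a sigma^a = 2 sigma as an xAct rule.
   Assumes DefTensor[sigma[], M] and a metric met with derivative CD. *)
sigmaNormRule = MakeRule[{CD[-a]@sigma[] CD[a]@sigma[], 2 sigma[]},
  MetricOn -> All, ContractMetrics -> True];

(* Apply it to the abstract Box(1/sigma) from above *)
boxInvSigma = ToCanonical[met[a, b] CD[-a]@CD[-b][1/sigma[]]];
boxInvSigma /. sigmaNormRule
```

A second rule of the same shape would handle the near-coincidence expansion of \sigma_a^a.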

I guess based on what you said, you want the near-coincidence limit of \sigma_{ab} valid to 4th order, so that box(1/sigma) is valid to 2nd order... is that correct? I think Barry Wardell has implemented near-coincidence expansions in terms of \sigma_a in xTensor... I CCed Barry on this email, hopefully he will let us know if that's true or not (otherwise use the rules in Sec. 6.1 of Poisson, Pound, Vega linked above).

Best
Leo

Thomas Bäckdahl

Jun 5, 2023, 4:47:52 PM
to xa...@googlegroups.com
Hi!

I might need to think a bit more about the general procedure for this problem. What Leo wrote is probably a good way to do it.

However, I can say the following.
Converting Christoffel symbols to partial derivatives is probably the opposite of what you want to do.
What I would do is the following:
1) Convert all derivatives of the metric to Christoffel symbols.
2) Convert all derivatives of Christoffel symbols to curvature.
3) Contract all metrics.
4) Canonicalize (with metric).
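
In xAct terms, the four steps might be sketched as follows. This is a sketch only: it assumes met, PD and CD are set up as in the original notebook, and uses ChristoffelToRiemann, the xTensor converter from Christoffel derivatives to curvature; the exact calls may need adapting to your definitions.

```mathematica
(* Sketch of the four steps: metric derivatives -> Christoffels -> curvature,
   then contract metrics and canonicalize. Assumes met, PD, CD as in the notebook. *)
step1 = GradMetricToChristoffel[expr, met, PD]; (* 1) dg -> Christoffel *)
step2 = ChristoffelToRiemann[step1];            (* 2) dChristoffel -> Riemann *)
step3 = ContractMetric[step2];                  (* 3) contract all metrics *)
result = ToCanonical[step3]                     (* 4) canonicalize with metric *)
```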

There are two important things to think about here.
1) If you have partial derivatives in your expression, the canonicalization will be much more complicated: it can take more time, and there may be problems moving indices up and down.
2) If you contract the metrics before canonicalization, you will reduce the number of indices. With fewer indices, the canonicalization should be easier and quicker.

Unfortunately it is difficult to estimate how long the canonicalization will take. A large group size does not by itself usually give very slow canonicalization. It is the combination of large groups and some structure (or bad luck) that makes canonicalization slow.

If the ideas above do not help to simplify things, one might need to think about irreducible and trace decompositions.
Unfortunately, this is a bit complicated to do in tensor language.
There is a new package that might be able to do some of that:
https://github.com/THelpin/xBrauer_Bundle

It might also be possible to do the calculations in spinor language. The SymManipulator and SymSpin packages can be used to do the irreducible and trace decompositions in that case. This will require some thought on how to set it up in the best way though.

Regards
Thomas

Thomas Bäckdahl

Jun 5, 2023, 5:03:42 PM
to xa...@googlegroups.com
Hi again!

I tried another thing. You can modify the order in which the tensors are sorted before canonicalization.
This can greatly affect the speed of the group calculations.
For instance
  xSortPrecedence[V] ^= 110
seems to speed up the calculation for the example in your image. I don't know if it is going to help for all terms though.

You can see how ToCanonical is sorting things with the function xSort.

Regards
Thomas