Thank you very much for this outstanding answer!
Points 1) & 2) let me compute the RicciScalar in about 10 minutes; however, the computation does not use all available CPU cores: one core sits at 100%, a second at around 60%, maybe 4-5 cores at around 10%, and the rest are close to 0% load (out of 16 physical cores / 32 logical cores). Is there a global option I could set to use parallelisation whenever possible, i.e. so that every command that supports a Parallelize option has it enabled?
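To make the question concrete, something along these lines is what I have in mind; this is only a sketch, and the list `cmds` is a hypothetical placeholder, since I don't know which xAct commands actually expose a Parallelize option:

(* Launch one subkernel per licensed core *)
LaunchKernels[];

(* Hypothetical: for each command in a candidate list, enable a
   Parallelize option if that command actually has one. *)
cmds = {ToCanonical, ContractMetric};  (* placeholder list, not verified *)
Scan[
  If[MemberQ[Options[#][[All, 1]], Parallelize],
    SetOptions[#, Parallelize -> True]] &,
  cmds]

Is there an established way to do this globally, rather than per command?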
The notebook you attached is awesome and computes in no time. This approach is far faster; thank you very much for writing it. I have three follow-up questions:
1) Is there a reason for loading xTensor first and then xPert and xCoba? Wouldn't xPert load xTensor anyway?
2) Is it possible to set rules containing derivatives in xCoba? The metric I use contains divergence-free vectors, so I'd like to have a rule like
MakeRule[{Derivative[0, 1, 0, 0][B1][t[], x[], y[], z[]], -Derivative[0, 0, 1, 0][B2][t[], x[], y[], z[]] - Derivative[0, 0, 0, 1][B3][t[], x[], y[], z[]]}]
but written like this, higher derivatives are not cancelled; e.g. a time derivative acting on top of the divergence of this vector is not set to zero. Can such rules be generalised, perhaps with placeholders (patterns) inside the Derivative[ ] part?
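In plain Mathematica I would try to express the generalised rule with patterns for the remaining derivative orders, roughly as below; this is a sketch assuming arguments (t, x, y, z) in that order, and I am unsure how such a pattern rule interacts with MakeRule and xCoba:

(* Divergence-free condition d_x B1 + d_y B2 + d_z B3 == 0, generalised so
   it also fires when extra t-, y-, or z-derivatives act on it: any
   derivative of B1 containing at least one x-derivative is rewritten. *)
divFreeRule =
  Derivative[a_, b_, c_, d_][B1][t_, x_, y_, z_] /; b >= 1 :>
    -Derivative[a, b - 1, c + 1, d][B2][t, x, y, z] -
     Derivative[a, b - 1, c, d + 1][B3][t, x, y, z];

Is there an xCoba-native way to achieve the same effect?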
3) I applied the expansion to some other expressions (the equations of motion). The tensorial EoM at first order fails because Simplify takes more than 300 seconds. How can I give Simplify more time when it is part of Collect[ ]? Or is there an even more suitable command for evaluating a tensorial expression? I attached a notebook whose very last output deals with this. I assume that the time (zero) component will look different from the spatial components, but that all the spatial components will look "the same"; can I somehow use this to make the output look nice?
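Regarding the time limit: would something like the following be the right approach? Collect's third argument is a function applied to each collected coefficient, so I could wrap Simplify there with a larger TimeConstraint (the names expr and pertVars are placeholders for my expression and perturbation variables, and 600 is an arbitrary budget in seconds):

(* Raise Simplify's per-expression time budget when it is used as
   Collect's post-processing function. *)
Collect[expr, pertVars, Simplify[#, TimeConstraint -> 600] &]

Or does the 300-second limit I am hitting come from somewhere else?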
Again, thank you very much for your answers.
PS: I had to compress the notebook; maybe the "nice" formatting blew up the file size. Sorry for that.