Parallelization of contractions


Mattia Villani

Jan 22, 2020, 1:43:52 AM
to sage-support
How can I use parallelization for tensor contractions? Consider the case where I have two successive contractions like this:

Tud=etuu['^{ab}']*eamup['^c_b']
Tp=Tud['^{ab}']*eamup['^c_a']

How can I parallelize this?

Eric Gourgoulhon

Jan 22, 2020, 2:45:08 PM
to sage-support
It suffices to type, before your contraction code:

Parallelism().set(nproc=8)

The computation of the contractions will then be parallelized over 8 processes.
Of course, you can adapt the value of nproc to your computer.
An example is given in the SageManifolds documentation.

Best wishes,

Eric.


Mattia Villani

Jan 29, 2020, 2:47:32 AM
to sage-support
I did as suggested, adding the line

Parallelism().set(nproc=4)

before the contraction, but it behaves strangely: it works for a short while with 4 processors, then with 3, then with 2, and finally it runs for hours on a single processor.
It seems that no new tasks are distributed among the processes.

Eric Gourgoulhon

Jan 30, 2020, 4:25:00 AM
to sage-support
Hi,


On Wednesday, January 29, 2020 at 08:47:32 UTC+1, Mattia Villani wrote:
I did as suggested, adding the line

Parallelism().set(nproc=4)

before the contraction, but it behaves strangely: it works for a short while with 4 processors, then with 3, then with 2, and finally it runs for hours on a single processor.
It seems that no new tasks are distributed among the processes.



This behaviour is not so strange: the parallelization is performed on pairs of indices, and it could be that the computation for one pair of indices takes much longer than the others. Since most of the computational time is spent in the simplification of symbolic expressions, you could try to use a lighter simplifying function via the method M.set_simplify_function, where M is your manifold. The fastest one would be M.set_simplify_function(simplify), but the resulting expressions might not be fully simplified. See the documentation of set_simplify_function for examples.

Best wishes,

Eric.