Writing parallel code

Andrew

May 10, 2016, 10:39:15 PM
to sage-support
I have access to a centos machine with many cores, so I want to start writing parallelised code on it. This is probably naive, but is using the code in sage.parallel the best way to write parallel code in sage?

I started using the @parallel decorator, but then I found the optional packages openmpi and mpi4py and wondered if it would be better to use these instead, except it seems that the packages are no longer maintained (and hence not useful?), as "new style" packages for them don't seem to exist. I am confused, however, because mpi4py is the main topic of the thematic tutorial
http://doc.sagemath.org/html/en/thematic_tutorials/numerical_sage/parallel_computation.html
which suggests that it is still being used.

Andrew

William Stein

May 11, 2016, 12:00:20 AM
to sage-support
On Tue, May 10, 2016 at 7:39 PM, Andrew <andrew...@sydney.edu.au> wrote:
> I have access to a centos machine with many cores, so I want to start
> writing parallelised code on it. This is probably naive, but is using the
> code in sage.parallel the best way to write parallel code in sage?

It depends on what you are trying to do. For some computations,
@parallel is the best possible way to do them, and for others it is
the worst possible way.

William

>
> I started using the @parallel decorator but then I found the optional
> packages openmpi and mpi4py and wondered if it would be better to use these
> instead, except it seems that the packages are no longer maintained (and
> hence not useful?) as "new style" packages for them don't seem to exist. I
> am confused, however, because mpi4py is the main topic of the thematic
> tutorial
> http://doc.sagemath.org/html/en/thematic_tutorials/numerical_sage/parallel_computation.html
> which suggests that it is still being used.
>
> Andrew



--
William (http://wstein.org)

Andrew

May 11, 2016, 12:32:17 AM
to sage-support
On Wednesday, 11 May 2016 14:00:20 UTC+10, William wrote:

> It depends on what you are trying to do. For some computations,
> @parallel is the best possible way to do them, and for others it is
> the worst possible way.

Is there a good place to learn about how best to do this (in sage/python/...)?

The first example that I care about looks something like this:
 
    mat = [[0 for s in xrange(tabs)] for t in xrange(tabs)]
    for s in xrange(tabs):
        for t in xrange(s, tabs):
            mat[s][t] = self._inner_product_st(s, t)
            mat[t][s] = mat[s][t]

The `_inner_product_st` method computes certain structure constants in a module. This method is time-consuming and slightly recursive, with "basic" cases being cached. Parallelising this loop seemed like the right place to me, but, to be honest, I have no idea what I am doing!
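
For concreteness, here is roughly what I have been trying with the @parallel decorator (untested and probably naive; tabs and _inner_product_st are as in the snippet above, and this would sit inside the same method so that self is in scope):

    @parallel(ncpus=24)
    def entry(s, t):
        return self._inner_product_st(s, t)

    pairs = [(s, t) for s in xrange(tabs) for t in xrange(s, tabs)]
    mat = [[0 for s in xrange(tabs)] for t in xrange(tabs)]
    # @parallel yields ((args, kwds), value) pairs as the workers finish
    for ((s, t), kwds), value in entry(pairs):
        mat[s][t] = value
        mat[t][s] = value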

Andrew
 

Vincent Delecroix

May 11, 2016, 12:53:38 AM
to sage-s...@googlegroups.com
Hi Andrew,

If you use *one* machine with many cores, then have a look at the
standard Python module multiprocessing

https://docs.python.org/2/library/multiprocessing.html

This is what @parallel uses under the hood, and it is not hard to get it working.
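
For example, a minimal sketch with a Pool (the worker must be a top-level function so it can be pickled; inner_product here is just a stand-in for the real computation):

    from multiprocessing import Pool

    def inner_product(s, t):
        # stand-in for your expensive computation
        return s * t

    def work(st):
        s, t = st
        return s, t, inner_product(s, t)

    if __name__ == '__main__':
        tabs = 10
        pairs = [(s, t) for s in xrange(tabs) for t in xrange(s, tabs)]
        pool = Pool(processes=4)
        results = pool.map(work, pairs)   # list of (s, t, value) triples
        pool.close()
        pool.join()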

If you have access to a cluster with several nodes, then you need
something else for communication between the nodes. mpi4py is one good
option, though installing it might be tricky and using it requires more
attention. Installing the optional Sage package is not advised; just use

sage -pip install mpi4py
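
If you ever go that route, the basic pattern looks like this (a sketch; run with something like mpirun -np 4 sage -python script.py):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # each process computes the part of the work it owns ...
    my_results = [(i, i * i) for i in xrange(rank, 20, size)]

    # ... and process 0 gathers everything at the end
    all_results = comm.gather(my_results, root=0)
    if rank == 0:
        print all_results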

Vincent

Vincent Delecroix

May 11, 2016, 12:56:27 AM
to sage-s...@googlegroups.com
At this point it is not clear to me whether or not you need shared
memory between the processes. This can be an important source of
overhead in the parallelization.
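
(For what it is worth, multiprocessing does offer a limited form of shared state; a sketch, assuming your values fit in C doubles:)

    from multiprocessing import Process, Array

    def fill(shared, offset, values):
        # write one chunk of results into the shared buffer
        for i, v in enumerate(values):
            shared[offset + i] = v

    if __name__ == '__main__':
        shared = Array('d', 8)   # 8 C doubles, lock-protected by default
        p = Process(target=fill, args=(shared, 0, [1.0, 2.0, 3.0, 4.0]))
        p.start()
        p.join()
        print list(shared)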

Andrew

May 11, 2016, 2:13:47 AM
to sage-support
Thanks Vincent for both your answers. I am planning on using just one machine with up to 24 cores. (If I am not able to get far enough this way then I will think about using the cluster. Using 24 cores will already be a big step up compared with what I am currently doing.)

If the processes can share memory, that would probably be good, but it is not essential. As this seems to require extra effort, I'll first try to see how it goes without sharing.

Andrew

mmancini

May 11, 2016, 7:16:41 AM
to sage-support
Hi Andrew,
I used @parallel in Sage to parallelize some tensor calculus (for example: src/sage/tensor/modules/comp.py).
It is not very complicated to use, but sometimes you need to reorganize your computation.
How to parallelize depends strictly on your code, so the part of the code you posted is not enough to tell how to do it.

The main actions you have to take are:
1) Create a function (with the @parallel decoration) that each core will use to compute its part of the work:
ex:

    @parallel(p_iter='multiprocessing', ncpus=24)
    def paral_prod(myfunc, local_list_ind):
        partial = []
        # local_list_ind is a subset of the set of (s, t) pairs
        for ind in local_list_ind:
            # ind[0] is the "s", ind[1] is the "t" of your code
            partial.append([ind[0], ind[1], myfunc(ind[0], ind[1])])
        return partial

Note that multiprocessing does not really use shared memory, so you have to pass to your function all the variables it needs.

2) Then create a "list" of inputs (that is, of "tuples"):
ex:

    listParalInput = [(self._inner_product_st, [[1,2],[1,3]]), (self._inner_product_st, [[2,3],[1,3]]), .......]

In this example each tuple in the list will be sent to a core.

Then the first core will execute:

    paral_prod(self._inner_product_st, [[1,2],[1,3]])

and so on for the other cores.

3) Now you can call the parallel function and put the results in "mat":

    for ii, val in paral_prod(listParalInput):
        for jj in val:
            # each jj in val contains [s, t, self._inner_product_st(s, t)]
            mat[jj[0]][jj[1]] = jj[2]
            mat[jj[1]][jj[0]] = jj[2]

I have not tested this code but it should work (at least in principle).
It remains to create the listParalInput automatically (you have to partition the indices of the triangular matrix); see the sketch below.
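A sketch of that partition (untested; it uses the same ncpus=24 as in the decorator above and would live in the same method, so that self and tabs are in scope):

    ncpus = 24
    pairs = [(s, t) for s in xrange(tabs) for t in xrange(s, tabs)]
    # deal the pairs out round-robin into (at most) ncpus chunks
    chunks = [pairs[i::ncpus] for i in xrange(ncpus)]
    listParalInput = [(self._inner_product_st, chunk) for chunk in chunks if chunk]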

I also have a slight doubt about passing self._inner_product_st as an argument (bound methods do not always pickle well), but you should test it.

Regards,
Marco