On 1 June 2012 15:37, thomas hisch <t.h...@gmail.com> wrote:
>
>
> On Friday, June 1, 2012 5:04:07 PM UTC+2, Lisandro Dalcin wrote:
>>
>> On 31 May 2012 18:19, thomas hisch <t.h...@gmail.com> wrote:
>> > Hello,
>> >
>> > Where can I find parallel (MPI) versions of (some of) the
>> > petsc4py/slepc4py examples found in the respective source tarballs?
>> >
>>
>> Generally speaking, parallelism with PETSc reduces to filling in
>> matrices and vectors in parallel, i.e. setting the Mat/Vec values at
>> each processor. To do this effectively, you first need some sort of
>> partitioning that assigns degrees of freedom to processors. For
>> example, this one solves a nonlinear problem matrix-free using
>> petsc4py; the partitioning is managed with a DA object (structured
>> grid):
>> http://code.google.com/p/petsc4py/source/browse/demo/bratu3d/bratu3d.py
>>
>> All the examples in the slepc4py tarball are parallel (though really
>> simple); you just have to run "mpiexec -n 5 python ex2.py".
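
To make the DA part concrete, here is a rough, untested sketch of a
structured-grid fill in petsc4py (the grid size and the value 1.0 are
just placeholders):

from petsc4py import PETSc

nx, ny = 32, 32                          # global grid size (placeholder)
da = PETSc.DA().create([nx, ny], stencil_width=1)
x = da.createGlobalVec()
(xs, xe), (ys, ye) = da.getRanges()      # index ranges owned by this process
xa = da.getVecArray(x)
for j in range(ys, ye):
    for i in range(xs, xe):
        xa[i, j] = 1.0                   # each process sets only its own entries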
>
>
> Thanks, mpiexec/mpirun in the slepc4py demo dir seems to work fine. Yesterday I
> only tested the poisson2d example in petsc4py with "mpirun -n 4 python
> poisson2d.py", which crashed due to an indexing error. I grepped the demo dirs
> of both slepc4py and petsc4py for "mpi4py" and "DECIDE" (I'm familiar with the
> C/C++ PETSc/SLEPc API) but didn't find a match. Therefore I assumed that none
> of the demos were parallelized.
>
That's right, not all the demos in petsc4py are parallel.
>>
>>
>> Could you provide some additional background on what you are looking for?
>
>
> I want to solve Hermitian and non-Hermitian Schrödinger- and Helmholtz-type
> problems using slepc4py. I have already written the code for solving the
> Helmholtz problem in C++ using SLEPc. In that code I rely on PETSC_DECIDE
> for partitioning my matrices for parallel use. How is this partitioning done
> in petsc4py?
>
Suppose M and N are the global row and column sizes; then you just do:

from petsc4py import PETSc

A = PETSc.Mat().create()
A.setType('aij')              # sparse AIJ format (MPIAIJ when run in parallel)
A.setSizes([M, N])            # global sizes only; local sizes are left to PETSc
A.setPreallocationNNZ([diag_nz, offdiag_nz])  # optional: per-row nonzeros in the
                                              # diagonal and off-diagonal blocks
A.setUp()

and you are done; petsc4py passes PETSC_DECIDE for the local sizes, so
PETSc chooses the row distribution for you.
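
For example, each process would then fill only the rows it owns, roughly
like this (an untested sketch, assuming M == N and that the preallocation
above leaves room for at least three nonzeros per row):

rstart, rend = A.getOwnershipRange()  # block of rows owned by this process
for i in range(rstart, rend):
    A[i, i] = 2.0                     # toy tridiagonal stencil, just for illustration
    if i > 0:
        A[i, i-1] = -1.0
    if i < N - 1:
        A[i, i+1] = -1.0
A.assemble()                          # collective: assemblyBegin() + assemblyEnd()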
BTW, if you already have fast C++ code that fills your matrix in
parallel, reusing it would be a good idea, as doing the fill in pure
Python will be rather slow. Look at demo/wrap-{cython,swig} to see how
this can be done.
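
The Python side of such a wrapper would then look roughly like this (the
module and function names are made up just to show the pattern; the real
examples are in those demo directories):

from petsc4py import PETSc
import matfill               # hypothetical Cython/SWIG-wrapped C++ module

A = PETSc.Mat().createAIJ([M, N])   # M, N as above
matfill.fill(A)              # the C++ side gets the Mat handle and calls MatSetValues
A.assemble()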