args = ['FEMSolver', 'parameters.cfg']
pro3 = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, cwd=path)
pro3.wait()
In general, I find that this sort of thing works fine under mpi4py, but it may be affected by the configuration of mpiexec or the resource scheduler on an HPC cluster. Also, the example script assigns stdout and stderr to pipes but never reads them, which seems like an error: if the child produces enough output to fill a pipe buffer, it will block waiting for the parent to read (at least use communicate() instead of wait()).
mpiexec may produce warnings or errors when processes attempt to fork, and the resource manager on a computing cluster may automatically terminate processes that are launched beyond the number of processes allocated for the job.
Is there error output from the subprocess, mpiexec, or the job manager? What does the exit code of the subprocess indicate?
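Something like the following is what I have in mind; a minimal sketch only, reusing FEMSolver, parameters.cfg, and the per-task working directory `path` from the original post:

import subprocess

args = ['FEMSolver', 'parameters.cfg']
proc = subprocess.Popen(args, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE, cwd=path)
# communicate() drains both pipes before waiting, so the child cannot
# block on a full pipe buffer the way it can under a bare wait().
out, err = proc.communicate()
if proc.returncode != 0:
    # Surface whatever the solver (or mpiexec) printed before failing.
    raise RuntimeError('FEMSolver exited with status %d:\n%s'
                       % (proc.returncode, err.decode(errors='replace')))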
> On Jun 21, 2020, at 1:19 PM, zhan...@gmail.com wrote:
>
> Hi Pavel
>
> Has the problem been solved?
> If so, could you share the solution?
>
> Thank you!
> Yi
>
> On Saturday, March 21, 2015 at 4:48:31 AM UTC+9, Pavel Ponomarev wrote:
> Hello,
>
> I can't get external programs to run from an MPI process. I'm trying to write a script to run an embarrassingly parallel optimization routine on a cluster, where a million independent computations are executed by several hundred MPI workers.
>
> The construction
> args = ['FEMSolver', 'parameters.cfg']
> pro3 = subprocess.Popen(args, stdout = subprocess.PIPE, stderr = subprocess.PIPE, cwd=path)
> pro3.wait()
> does not work. It just gets the first line of my FEMSolver's stdout, the FEMSolver's execution is terminated, and the script continues, so I can't get results from my simulation.
>
> Do you know how to reliably execute an external program within an MPI-script? Any solutions or examples?
>
I don't believe that mpiexec allows for recursive calls or subdivision of resources. It sounds like you probably have 312 processes that are each launching 20 more, so you are running 6240 processes in a 312-core allocation.
You might accomplish what you want by launching program.py in a 3-node allocation while specifying to the outer mpiexec that it should use only ncores/task_size processes per node. https://www.open-mpi.org/doc/v3.0/man1/mpiexec.1.php
But, again, I don't think mpiexec supports nested calls, so it's probably not a good idea.
It doesn't appear to me that mpi4py.futures.MPIPoolExecutor is intended to support tasks that are themselves multi-process, but I'm not well acquainted with it or the MPI-2 dynamic process management features it relies on. If it were supported, I would think you would provide the number of task processes as an argument to `map` rather than using `mpiexec` in the subprocess call. (Does MPI_Comm_spawn() allow a fixed number of child processes to share a new MPI_COMM_WORLD?)
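If you want to experiment with spawning directly, the mpi4py call looks roughly like this; a sketch only, assuming a hypothetical child.py worker script, and whether it runs at all depends on the MPI implementation and the cluster's resource manager:

from mpi4py import MPI

# Spawn 4 copies of a (hypothetical) worker script. The children share
# a new MPI_COMM_WORLD of size 4 among themselves; 'inter' is an
# intercommunicator linking the parent to that group.
inter = MPI.COMM_SELF.Spawn('python', args=['child.py'], maxprocs=4)
inter.Disconnect()

# child.py would contain something like:
#   from mpi4py import MPI
#   comm = MPI.COMM_WORLD            # the children's own world (size 4)
#   parent = MPI.Comm.Get_parent()   # intercommunicator back to the parent
#   ...do the work...
#   parent.Disconnect()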
You might also consider using a package that doesn't use MPI to coordinate MPI-based workloads, such as Parsl, or a workload management system based on pilot jobs.
Unfortunately, when I run the same command in a unit test using subprocess.run, it gives a non-zero exit status with no further info.
Dear all, I might have a similar problem. Would you mind having a look at https://stackoverflow.com/questions/69767193/subprocess-run-fails-at-the-second-iteration and telling me if you have any idea how I might solve this?