Apologies for cross-posting: I originally posted this question on Stack Overflow, but I feel that it was the wrong forum.
http://stackoverflow.com/questions/38023986/debugging-parallel-python-and-scoop-interactively
I come from a background of debugging mpi4py code interactively, with the Python instance run by each CPU displayed in its own xterm window. Following the advice in the Stack Overflow answer "debugging mpi4py interactively", I have been able to execute my code with a command such as $ mpirun -np 4 xterm -e "ipython -i script.py".
Executing parallel Python code in this manner means that if I insert breakpoints with pdb.set_trace(), bugs that relate to the context of each single processor become very transparent, and this approach greatly facilitates monkey patching.
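For context, here is a minimal sketch of the kind of per-rank breakpoint I mean. To keep it self-contained it reads the rank from Open MPI's OMPI_COMM_WORLD_RANK environment variable rather than importing mpi4py; the function name buggy_step and the choice of rank 1 are hypothetical, just for illustration:

```python
import os
import pdb

# Under Open MPI, each spawned process can read its rank from the
# environment; when run standalone (no mpirun), rank defaults to -1.
rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", -1))

def buggy_step(x):
    # Stand-in for the per-processor work being debugged.
    return x * 2

if rank == 1:
    # Drop only rank 1 into the debugger; because each rank runs in its
    # own xterm, the other windows keep running undisturbed.
    pdb.set_trace()

result = buggy_step(rank)
print(f"rank {rank}: result = {result}")
```

Launched with mpirun -np 4 xterm -e "ipython -i script.py", each xterm shows one rank's session, and only the window for rank 1 stops at the breakpoint.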
I have now moved from mpi4py to SCOOP, and I am wondering if there is any similar way to launch the Python processes corresponding to different CPUs in different xterm instances. The reason for the switch is that I am now using the Python module DEAP, which is designed to work well with SCOOP.
I am also wondering whether the Wakari IPcluster approach can be combined with SCOOP or mpi4py as well.
Note: I added the ZeroMQ tag because I believe SCOOP is built on top of ZeroMQ.
Thanks for any help.
Regards
Russell.