SCOOP Cluster Configuration

Robin Müller-Bady

Feb 3, 2015, 4:44:36 AM
Dear all,

I'm currently using SCOOP (SCOOP 0.7.1 release on linux2 using Python 2.7.8, API: 1013) to distribute my work with deap across a local network of 2 machines.
Currently, distributing the computation only works if
a) the path of the script from which the root process is invoked also exists on the second machine, and
b) the script itself exists at exactly that location.
In my case this is possible because my usernames on both machines are the same, so I can create an identical directory structure, but is this the preferred way?
As an example, I execute my script from "/home/robin/workspace/deap/":

python -m scoop
runs fine on my local machine using 8 cores.

python -m scoop -host localhost [host2]
fails if [host2]:/home/robin/workspace/deap/ does not exist.
It runs without error if the directory is created but left empty, yet no remote process is spawned (even though scoop tells me it did!).
In that case, no python process is running on the remote machine.

python -m scoop -host localhost [host2]
runs fine if the directory [host2]:/home/robin/workspace/deap/ exists and the script is present at that location as well.
The workers are spawned and the machine is actually doing work.

Am I missing a point here, or did I just get it wrong? Why are my script and its path necessary on the remote host? To my understanding, only a remote python instance is spawned. Even if the script is necessary on the target machine, wouldn't it be better to distribute it into the /tmp directories of the remote machines via the ssh connection before executing?
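In the meantime, a workaround sketch (not an official SCOOP feature): mirror the working directory to each remote host before launching, so the path/script requirement is satisfied. The host name "host2" and the script name "your_script.py" below are placeholders, and this assumes passwordless ssh to the remote host, the same assumption SCOOP's launcher makes.

```shell
#!/bin/sh
# Sketch of a pre-launch sync. "host2", the paths, and "your_script.py"
# are placeholders from this thread, not SCOOP defaults.
WORKDIR="$HOME/workspace/deap"
REMOTE=host2

# Mirror the script directory so it exists at the same path remotely.
# The trailing slashes make rsync copy the directory's contents.
rsync -az --delete "$WORKDIR/" "$REMOTE:$WORKDIR/"

# Launch from inside the mirrored directory; the remote workers then
# find the same working directory and script path.
cd "$WORKDIR"
python -m scoop -host localhost "$REMOTE" your_script.py
```

This only papers over the requirement rather than removing it, but it keeps both machines in sync automatically.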

Best regards,


J Mrazek

Nov 1, 2016, 1:44:57 AM
to scoop-users
I also have to have the same python installation at /home/user1/anaconda/bin/python ... How is it with remote directories? Any chance to change this?
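Not an authoritative answer, but it may be worth checking `python -m scoop --help` on your version: if the launcher lists options for overriding the remote interpreter and the remote working directory (recent 0.7 launchers document `--python-interpreter` and `--path`/`-p`), something like the following sketch could avoid mirroring the anaconda layout. All host names and paths here are placeholders, and the flags are an assumption to verify against your installed version.

```shell
# Hypothetical invocation; verify the flags exist with:
#   python -m scoop --help
# "host2", the interpreter path, the directory, and "your_script.py"
# are placeholders.
python -m scoop \
    -host localhost host2 \
    --python-interpreter /home/user1/anaconda/bin/python \
    --path /home/user1/workspace/deap \
    your_script.py
```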