Hello Henrik,
Yes, there is a way, provided that:
- there is a common file system shared by all compute nodes and the host where the GC3Pie script is running;
- the mount points for these file systems are the same across all nodes.
In your case, you should be fine if you run the GC3Pie driver script on the cluster head node. It would not work, instead, if you run GC3Pie on e.g. a laptop and SSH into the cluster.
The trick is simple: just omit input files from the application definition, and directly refer to input files by absolute path name in the arguments parameter:
Application(
    arguments=['myprog', '/gpfs/foo/bar'],
    inputs=[],
    outputs=['baz.out'],
    # ...
)
Note that the 'outputs=' parameter is still needed to avail of the usual GC3Pie mechanism for collecting output files into the location where the script is running. Should you want to keep the output files in a GPFS directory as well, leave 'outputs=' empty and add a 'terminated()' method:
# in class MyApplication (requires 'import shutil' at module top)
def terminated(self):
    shutil.move(self.execution.lrms_execdir + '/baz.out', '/gpfs/outfiles/')
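As a rough stand-alone illustration of what that hook does (the FakeExecution stub, the class name, and all paths below are placeholders for this sketch, not real GC3Pie classes), the move step works like this:

```python
import os
import shutil
import tempfile

class FakeExecution:
    """Stand-in for self.execution; lrms_execdir is the job's run directory."""
    def __init__(self, lrms_execdir):
        self.lrms_execdir = lrms_execdir

class MyApplicationSketch:
    """Placeholder for a GC3Pie Application subclass, for illustration only."""
    def __init__(self, execdir, destdir):
        self.execution = FakeExecution(execdir)
        self.destdir = destdir

    def terminated(self):
        # same move as above, using os.path.join instead of string concatenation
        shutil.move(
            os.path.join(self.execution.lrms_execdir, 'baz.out'),
            self.destdir)

# demo: create a fake execution directory holding 'baz.out', then "terminate"
execdir = tempfile.mkdtemp()
destdir = tempfile.mkdtemp()
with open(os.path.join(execdir, 'baz.out'), 'w') as f:
    f.write('result data\n')

app = MyApplicationSketch(execdir, destdir)
app.terminated()
print(os.path.exists(os.path.join(destdir, 'baz.out')))  # True
```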
I'm sorry I cannot provide a better worked-out example now, but I'm travelling and can only use my iPhone for processing emails.
Hope this helps!
Ciao,
R
--
You received this message because you are subscribed to the Google Groups "gc3pie" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gc3pie+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.