singularity not changing to current working directory

Michael Yourshaw

Feb 11, 2018, 5:26:05 PM
to singularity
I'm running a singularity image that contains java and other genetics applications. I use the Broad Institute Cromwell workflow engine running on a virtual machine to submit jobs to a Slurm-managed compute cluster.

My version of Singularity is a December 2017 patched build of 2.4.2 that fixed problems binding to our file system; we have named this patch 2.4.2c on our system. See issue #1205, "Unable to bind directories on NFS filesystem, 'permission denied' error even though I have permission", for details.

Cromwell submits a job with a script that essentially:
    - sets the current working directory to an 'execution' directory
    - invokes `singularity exec` to run a java application whose outputs are expected to go to that previously set working directory.
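The two steps above can be sketched roughly like this (a hypothetical stand-in, not the actual Cromwell-generated script; the directory and image names are placeholders):

```shell
#!/bin/bash
# Hypothetical sketch of the Cromwell submit script's behavior.
# EXEC_DIR stands in for the auto-generated execution directory.
EXEC_DIR=$(mktemp -d)/execution
mkdir -p "$EXEC_DIR"
cd "$EXEC_DIR"                    # step 1: enter the execution directory
# step 2 (commented out here, since the image isn't available):
#   singularity exec GATK.simg java -jar picard.jar SomeTool ...
touch output.metrics              # the tool is expected to write into $PWD
ls output.metrics
```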

Instead, output is going to the home directory of the user that is running the job.

The home directories have non-standard names, like `/homelink/cmoco_sys_dev`, and are actually symbolic links, e.g. `cmoco_sys_dev -> /cmoco_sys_dev/share/cmoco_sys_dev/nfs/cmoco_sys_dev`.

I have a `/storage` bind point in the image, which I bind to the top-level directory with `-B /gpfs/share/cmoco_sys_dev/nfs/storage:/storage`, and the current working directory is indeed reachable under that mount.

Here is a transcript of a manual replication of this problem:

```
cubipmcmp001:~$ cd /gpfs/share/cmoco_sys_dev/nfs/storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/8ae5051e-2950-42f8-bd07-6ad251077e06/call-CollectQualityYieldMetrics/shard-0/execution
cubipmcmp001:execution$ pwd
/gpfs/share/cmoco_sys_dev/nfs/storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/8ae5051e-2950-42f8-bd07-6ad251077e06/call-CollectQualityYieldMetrics/shard-0/execution
cubipmcmp001:execution$ /gpfs/software/singularity/singularity2.4.2c/bin/singularity shell -B /gpfs/share/cmoco_sys_dev/nfs/storage:/storage /gpfs/share/cmoco_sys_dev/nfs/storage/germline/applications/singularity/GATK.simg
Singularity: Invoking an interactive shell within container...

Singularity singularity_GATK_3.8-0_4.0.1.0_picard2.17.6_samtools1.7_jre8u162_python3.6.4_2018-02-10.simg:~> pwd
/homelink/yoursham
Singularity singularity_GATK_3.8-0_4.0.1.0_picard2.17.6_samtools1.7_jre8u162_python3.6.4_2018-02-10.simg:~> ls -l /storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/8ae5051e-2950-42f8-bd07-6ad251077e06/call-CollectQualityYieldMetrics/shard-0/execution/
total 32
-rw-rw---- 1 2001912691 ticr_cmoco_sys_dev     2 Feb 10 16:45 rc
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev  2243 Feb 10 16:43 script
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev  1045 Feb 10 16:43 script.submit
-rw-rw---- 1 2001912691 ticr_cmoco_sys_dev 13156 Feb 10 16:45 stderr
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev   972 Feb 10 16:43 stderr.submit
-rw-rw---- 1 2001912691 ticr_cmoco_sys_dev     0 Feb 10 16:43 stdout
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev    27 Feb 10 16:43 stdout.submit
drwxrwxrwx 2 2001912691 ticr_cmoco_sys_dev  4096 Feb 10 16:43 tmp.LToYy9
Singularity singularity_GATK_3.8-0_4.0.1.0_picard2.17.6_samtools1.7_jre8u162_python3.6.4_2018-02-10.simg:~> cd /storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/8ae5051e-2950-42f8-bd07-6ad251077e06/call-CollectQualityYieldMetrics/shard-0/execution/
Singularity singularity_GATK_3.8-0_4.0.1.0_picard2.17.6_samtools1.7_jre8u162_python3.6.4_2018-02-10.simg:/storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/8ae5051e-2950-42f8-bd07-6ad251077e06/call-CollectQualityYieldMetrics/shard-0/execution> ls -l
total 32
-rw-rw---- 1 2001912691 ticr_cmoco_sys_dev     2 Feb 10 16:45 rc
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev  2243 Feb 10 16:43 script
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev  1045 Feb 10 16:43 script.submit
-rw-rw---- 1 2001912691 ticr_cmoco_sys_dev 13156 Feb 10 16:45 stderr
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev   972 Feb 10 16:43 stderr.submit
-rw-rw---- 1 2001912691 ticr_cmoco_sys_dev     0 Feb 10 16:43 stdout
-rw-r--r-- 1 2001912691 ticr_cmoco_sys_dev    27 Feb 10 16:43 stdout.submit
drwxrwxrwx 2 2001912691 ticr_cmoco_sys_dev  4096 Feb 10 16:43 tmp.LToYy9
```

v

Feb 11, 2018, 6:05:49 PM
to singu...@lbl.gov
I can't comment on your cluster's setup or whether an entire mount is disallowed, but for the directory you land in when you use the image, did you try running the command with `--pwd` to set it?

--
You received this message because you are subscribed to the Google Groups "singularity" group.
To unsubscribe from this group and stop receiving emails from it, send an email to singularity+unsubscribe@lbl.gov.



--
Vanessa Villamia Sochat
Stanford University '16

Michael Yourshaw

Feb 11, 2018, 6:29:13 PM
to singularity
Using --pwd may not be feasible within the constraints of the pipeline, because the working directories are auto-generated.

However, I tried --pwd manually, and it failed:

```
cubipmcmp001:execution$ /gpfs/software/singularity/singularity2.4.2c/bin/singularity exec --pwd /gpfs/share/cmoco_sys_dev/nfs/storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/28b31fed-49f0-4b82-80c3-2746f21c84fe/call-CollectQualityYieldMetrics/shard-0/execution -B /homelink:/homelink -B /gpfs/share/cmoco_sys_dev/nfs/storage:/storage /gpfs/share/cmoco_sys_dev/nfs/storage/germline/applications/singularity/VEP.simg pwd
ERROR  : Could not change directory to: /gpfs/share/cmoco_sys_dev/nfs/storage/cromwell/cromwell-executions/PairedEndSingleSampleWorkflow/28b31fed-49f0-4b82-80c3-2746f21c84fe/call-CollectQualityYieldMetrics/shard-0/execution
ABORT  : Retval = 255
```
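One possible explanation (my assumption, not confirmed in the thread): `--pwd` must name a path that exists *inside* the container, and the `/gpfs/...` host path is only visible there under its bind target `/storage`. A sketch of rewriting the host path to the container-side path before passing it to `--pwd`, using the `-B` mapping from the commands above:

```shell
# Rewrite a host path under the bind source to the container-side path.
# HOST_PWD is a placeholder execution directory, not a real one.
HOST_BIND=/gpfs/share/cmoco_sys_dev/nfs/storage   # -B source on the host
CONT_BIND=/storage                                # -B target in the container
HOST_PWD=$HOST_BIND/cromwell/some/execution       # placeholder execution dir
CONTAINER_PWD=$CONT_BIND${HOST_PWD#"$HOST_BIND"}  # strip source prefix, prepend target
echo "$CONTAINER_PWD"
```

The resulting `$CONTAINER_PWD` is what one would hand to `--pwd` instead of the raw host path.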


I also experimented with adding `-B /homelink` to the command, but there is no `/homelink` defined in the image. That caused the container's cwd to be `/` (root).

When I created a `/homelink` in the image and used `-B /homelink:/homelink`, I got "WARNING: Could not chdir to home: /homelink/yoursham" and the cwd was still `/`.
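Since the `/homelink` entries are symlinks, one thing worth checking (my assumption, not verified on this cluster) is what the link actually resolves to — it is the resolved target, not the link name, that must be reachable inside the container for a chdir to succeed. A minimal illustration with `readlink -f`, using a temp layout that stands in for `/homelink`:

```shell
# Resolve a symlinked home-style entry to its real target.
BASE=$(mktemp -d)
BASE=$(readlink -f "$BASE")                  # canonicalize the base itself
mkdir -p "$BASE/real/cmoco_sys_dev"          # stand-in for the NFS target
ln -s "$BASE/real/cmoco_sys_dev" "$BASE/homelink_entry"
REAL=$(readlink -f "$BASE/homelink_entry")   # follow the link chain
echo "$REAL"
```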

v

Feb 11, 2018, 7:02:29 PM
to singu...@lbl.gov
It looks like the issue is that it can't change to the gpfs location to begin with. Could you post the whole output of `--debug` to the list (thanks for the nice formatting, too!) so we can trace the entire logic? If it's NFS, I think we've had issues like this in the past.
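Capturing the full debug output to a file makes it easy to post; a sketch of the pattern (the singularity line is commented out and replaced with a stand-in `echo` so the snippet runs anywhere):

```shell
# Capture a command's combined stdout/stderr to a file while still
# showing it on screen, for posting to the list.
LOG=$(mktemp)
# Real invocation would look like:
#   /gpfs/software/singularity/singularity2.4.2c/bin/singularity --debug \
#     shell -B /gpfs/share/cmoco_sys_dev/nfs/storage:/storage GATK.simg 2>&1 | tee "$LOG"
echo "DEBUG: example line" 2>&1 | tee "$LOG"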
