Hi,
All the pipeline does (for now) is fetch tarballs with a particular naming format and use tar to extract specific files from them.
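For context, the extraction step looks roughly like this (a minimal sketch; the actual process, pattern, and file names in my pipeline differ):

```groovy
// Illustrative sketch only -- process name, output name, and the
// wildcard pattern are placeholders, not my real pipeline code.
process EXTRACT {
    input:
    path tarball

    output:
    path "${tarball.simpleName}"

    script:
    """
    # Pull only the files we need out of the tarball
    tar -xzf ${tarball} --wildcards '*/some_subdir/*'
    """
}
```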
Running this with the standard profile in the config (the local executor) works fine. However, when I use the cluster profile to run with the slurm executor, I get a strange error (included in the gist above) saying sbatch could not be used, I believe because a file, ".command.run", does not exist.
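My profiles are set up along these lines (simplified to the executor settings; the cluster-specific options shown are illustrative, not my exact config):

```groovy
// Simplified sketch of the relevant profiles in nextflow.config
profiles {
    standard {
        process.executor = 'local'
    }
    cluster {
        process.executor = 'slurm'
        // queue name below is a placeholder
        process.queue    = 'some-partition'
    }
}
```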
However, inspecting my work directory seems to indicate otherwise...
[bward@EI-HPC d2c0bfb1-c37a-4211-9493-86b15d4e773e]$ ls -a scratch/work/*/*

scratch/work/67/3fa5055bb67f6c92dd5fc03f93d6a0:
.  ..  .command.begin  .command.err  .command.log  .command.out
.command.run  .command.sh  .command.trace  .exitcode
Robert_Davey_EI_RD_ENQ-3469_C_01_Batch_11_10_2021-Demultiplex_Barcodes_CCS_Analysis
Robert_Davey_EI_RD_ENQ-3469_C_01_Batch_11_10_2021-Demultiplex_Barcodes_CCS_Analysis.tar.gz

scratch/work/6d/a97531ebf595145049576f078ca246:
.  ..  .command.run  .command.sh
In the first directory, from the local run, the symlinked inputs, outputs, and dotfiles are all present. In the second, where slurm was used, there are fewer files, but .command.run is indeed present, and I can run sbatch on it myself without a problem. So I'm confused as to why I'm seeing this "file not found" error.
Can anyone help me understand why I might be observing this, and what I can do to fix it?
Thanks,
Sab