Error: Directory cannot be locked. Please make sure that no other Snakemake process is ...

SBS DOULFOOX

Jun 18, 2019, 2:28:44 PM
to pigx
Hello,

I'm trying to run pigx-rnaseq on around 300 GB of FASTQ files with 9 DESeq analyses, and I need help with the following error message, which I do not understand:
 
Commencing snakemake run submission to cluster
Building DAG of jobs...
Error: Directory cannot be locked. Please make sure that no other Snakemake process is trying to create the same files in the following directory:
/fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/results
If you are sure that no other instances of snakemake are running on this directory, the remaining lock was likely caused by a kill signal or a power loss. It can be removed with the --unlock argument.



To run the pigx-rnaseq pipeline, I use the following submission script:

#$ -V                                   # export current environment variables to the job
#$ -pe smp 10                           # request 10 slots in the "smp" parallel environment
#$ -l h_vmem=20G,os=centos7,data        # memory limit and node constraints
#$ -e /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/results/   # stderr logs
#$ -o /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/results/   # stdout logs

cd /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/
export PATH="/home/kpapadak/.guix-profile/bin${PATH:+:}$PATH"   # put the Guix profile first on PATH
pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv

My settings.yaml is this:

locations:
  reads-dir: /fast/AG_Zinzen/AliMcCorkindale_RNASeq_processingCopy
  output-dir: /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/results
  genome-fasta: /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/AnotationsFromAlisa/dna/Drosophila_melanogaster.BDGP6.22.dna.toplevel.fa
  cdna-fasta: /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/AnotationsFromAlisa/cdna/Drosophila_melanogaster.BDGP6.22.cdna.all.fa
  gtf-file: /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/AnotationsFromAlisa/GTF/Drosophila_melanogaster.BDGP6.22.96.chr.gtf

organism: dmelanogaster

DEanalyses:

  IC_vs_VC:
    case_sample_groups: 'ind'
    control_sample_groups: 'vnd'
    covariates: 'time'

  IC:
    case_sample_groups: 'ind'
    control_sample_groups: 'ind_ctrl'
    covariates: 'time'
 
  VC:
    case_sample_groups: 'vnd'
    control_sample_groups: 'vnd_ctrl'
    covariates: 'time'
 
  NB:
    case_sample_groups: 'pros'
    control_sample_groups: 'pros_ctrl'
    covariates: 'time'
 
  N:
    case_sample_groups: 'elav'
    control_sample_groups: 'elav_ctrl'
    covariates: 'time'
 
  GLIA:
    case_sample_groups: 'repo'
    control_sample_groups: 'repo_ctrl'
    covariates: 'time'
 
  NB_vs_N:
    case_sample_groups: 'pros'
    control_sample_groups: 'elav'
    covariates: 'time'
 
  NB_vs_GLIA:
    case_sample_groups: 'pros'
    control_sample_groups: 'repo'
    covariates: 'time'
 
  N_vs_GLIA:
    case_sample_groups: 'elav'
    control_sample_groups: 'repo'
    covariates: 'time'
 
 

execution:
  submit-to-cluster: yes
  jobs: 9 
  rules:
    __default__:
      threads: 1
      memory: 8G
    star_index:
      threads: 8
      memory: 4G
    salmon_index:
      threads: 8
      memory: 4G
    salmon_quant:
      threads: 8
      memory: 5G
    star_map:
      threads: 8
      memory: 20G

And my sample-sheet.csv is this:

name,reads,reads2,sample_type,time,replicate,S,L
18-22h_Elav-neg_1_S39_L007,18-22h_Elav-neg_1_S39_L007_R1_001.fastq.gz,18-22h_Elav-neg_1_S39_L007_R2_001.fastq.gz,elav_ctrl,18-22,1,S39_L007,L007
18-22h_Elav-neg_2_S40_L007,18-22h_Elav-neg_2_S40_L007_R1_001.fastq.gz,18-22h_Elav-neg_2_S40_L007_R2_001.fastq.gz,elav_ctrl,18-22,2,S40_L007,L007
18-22h_Elav-pos_1_S41_L007,18-22h_Elav-pos_1_S41_L007_R1_001.fastq.gz,18-22h_Elav-pos_1_S41_L007_R2_001.fastq.gz,elav,18-22,1,S41_L007,L007
18-22h_Elav-pos_2_S42_L007,18-22h_Elav-pos_2_S42_L007_R1_001.fastq.gz,18-22h_Elav-pos_2_S42_L007_R2_001.fastq.gz,elav,18-22,2,S42_L007,L007
18-22h_Repo-neg_1_S19_L004,18-22h_Repo-neg_1_S19_L004_R1_001.fastq.gz,18-22h_Repo-neg_1_S19_L004_R2_001.fastq.gz,repo_ctrl,18-22,1,S19_L004,L004
18-22h_Repo-neg_2_S5_L001,18-22h_Repo-neg_2_S5_L001_R1_001.fastq.gz,18-22h_Repo-neg_2_S5_L001_R2_001.fastq.gz,repo_ctrl,18-22,2,S5_L001,L001
18-22h_Repo-pos_1_S20_L004,18-22h_Repo-pos_1_S20_L004_R1_001.fastq.gz,18-22h_Repo-pos_1_S20_L004_R2_001.fastq.gz,repo,18-22,1,S20_L004,L004
18-22h_Repo-pos_2_S6_L001,18-22h_Repo-pos_2_S6_L001_R1_001.fastq.gz,18-22h_Repo-pos_2_S6_L001_R2_001.fastq.gz,repo,18-22,2,S6_L001,L001
4-6h_Pros-neg_1_S21_L004,4-6h_Pros-neg_1_S21_L004_R1_001.fastq.gz,4-6h_Pros-neg_1_S21_L004_R2_001.fastq.gz,pros_ctrl,4-6,1,S21_L004,L004
4-6h_Pros-neg_2_S22_L004,4-6h_Pros-neg_2_S22_L004_R1_001.fastq.gz,4-6h_Pros-neg_2_S22_L004_R2_001.fastq.gz,pros_ctrl,4-6,2,S22_L004,L004
4-6h_Pros-pos_1_S23_L004,4-6h_Pros-pos_1_S23_L004_R1_001.fastq.gz,4-6h_Pros-pos_1_S23_L004_R2_001.fastq.gz,pros,4-6,1,S23_L004,L004
4-6h_Pros-pos_2_S24_L004,4-6h_Pros-pos_2_S24_L004_R1_001.fastq.gz,4-6h_Pros-pos_2_S24_L004_R2_001.fastq.gz,pros,4-6,2,S24_L004,L004
4-6h_Vnd-neg_1_S13_L003,4-6h_Vnd-neg_1_S13_L003_R1_001.fastq.gz,4-6h_Vnd-neg_1_S13_L003_R2_001.fastq.gz,vnd_ctrl,4-6,1,S13_L003,L003
4-6h_Vnd-neg_2_S14_L003,4-6h_Vnd-neg_2_S14_L003_R1_001.fastq.gz,4-6h_Vnd-neg_2_S14_L003_R2_001.fastq.gz,vnd_ctrl,4-6,2,S14_L003,L003
4-6h_Vnd-pos_1_S15_L003,4-6h_Vnd-pos_1_S15_L003_R1_001.fastq.gz,4-6h_Vnd-pos_1_S15_L003_R2_001.fastq.gz,vnd,4-6,1,S15_L003,L003
4-6h_Vnd-pos_2_S16_L003,4-6h_Vnd-pos_2_S16_L003_R1_001.fastq.gz,4-6h_Vnd-pos_2_S16_L003_R2_001.fastq.gz,vnd,4-6,2,S16_L003,L003
6-8h_Elav-neg_1_S29_L005,6-8h_Elav-neg_1_S29_L005_R1_001.fastq.gz,6-8h_Elav-neg_1_S29_L005_R2_001.fastq.gz,elav_ctrl,6-8,1,S29_L005,L005
6-8h_Elav-neg_2_S30_L005,6-8h_Elav-neg_2_S30_L005_R1_001.fastq.gz,6-8h_Elav-neg_2_S30_L005_R2_001.fastq.gz,elav_ctrl,6-8,2,S30_L005,L005
6-8h_Elav-pos_1_S31_L006,6-8h_Elav-pos_1_S31_L006_R1_001.fastq.gz,6-8h_Elav-pos_1_S31_L006_R2_001.fastq.gz,elav,6-8,1,S31_L006,L006
6-8h_Elav-pos_2_S32_L006,6-8h_Elav-pos_2_S32_L006_R1_001.fastq.gz,6-8h_Elav-pos_2_S32_L006_R2_001.fastq.gz,elav,6-8,2,S32_L006,L006
6-8h_Ind-neg_1_S7_L002,6-8h_Ind-neg_1_S7_L002_R1_001.fastq.gz,6-8h_Ind-neg_1_S7_L002_R2_001.fastq.gz,ind_ctrl,6-8,1,S7_L002,L002
6-8h_Ind-neg_2_S8_L002,6-8h_Ind-neg_2_S8_L002_R1_001.fastq.gz,6-8h_Ind-neg_2_S8_L002_R2_001.fastq.gz,ind_ctrl,6-8,2,S8_L002,L002
6-8h_Ind-pos_1_S9_L002,6-8h_Ind-pos_1_S9_L002_R1_001.fastq.gz,6-8h_Ind-pos_1_S9_L002_R2_001.fastq.gz,ind,6-8,1,S9_L002,L002
6-8h_Ind-pos_2_S10_L002,6-8h_Ind-pos_2_S10_L002_R1_001.fastq.gz,6-8h_Ind-pos_2_S10_L002_R2_001.fastq.gz,ind,6-8,2,S10_L002,L002
6-8h_Pros-neg_1_S25_L005,6-8h_Pros-neg_1_S25_L005_R1_001.fastq.gz,6-8h_Pros-neg_1_S25_L005_R2_001.fastq.gz,pros_ctrl,6-8,1,S25_L005,L005
6-8h_Pros-neg_2_S26_L005,6-8h_Pros-neg_2_S26_L005_R1_001.fastq.gz,6-8h_Pros-neg_2_S26_L005_R2_001.fastq.gz,pros_ctrl,6-8,2,S26_L005,L005
6-8h_Pros-pos_1_S27_L005,6-8h_Pros-pos_1_S27_L005_R1_001.fastq.gz,6-8h_Pros-pos_1_S27_L005_R2_001.fastq.gz,pros,6-8,1,S27_L005,L005
6-8h_Pros-pos_2_S28_L005,6-8h_Pros-pos_2_S28_L005_R1_001.fastq.gz,6-8h_Pros-pos_2_S28_L005_R2_001.fastq.gz,pros,6-8,2,S28_L005,L005
6-8h_Repo-neg_1_S43_L008,6-8h_Repo-neg_1_S43_L008_R1_001.fastq.gz,6-8h_Repo-neg_1_S43_L008_R2_001.fastq.gz,repo_ctrl,6-8,1,S43_L008,L008
6-8h_Repo-neg_2_S44_L008,6-8h_Repo-neg_2_S44_L008_R1_001.fastq.gz,6-8h_Repo-neg_2_S44_L008_R2_001.fastq.gz,repo_ctrl,6-8,2,S44_L008,L008
6-8h_Repo-pos_1_S45_L008,6-8h_Repo-pos_1_S45_L008_R1_001.fastq.gz,6-8h_Repo-pos_1_S45_L008_R2_001.fastq.gz,repo,6-8,1,S45_L008,L008
6-8h_Repo-pos_2_S46_L008,6-8h_Repo-pos_2_S46_L008_R1_001.fastq.gz,6-8h_Repo-pos_2_S46_L008_R2_001.fastq.gz,repo,6-8,2,S46_L008,L008
6-8h_Vnd-neg_1_S17_L003,6-8h_Vnd-neg_1_S17_L003_R1_001.fastq.gz,6-8h_Vnd-neg_1_S17_L003_R2_001.fastq.gz,vnd_ctrl,6-8,1,S17_L003,L003
6-8h_Vnd-neg_2_S18_L003,6-8h_Vnd-neg_2_S18_L003_R1_001.fastq.gz,6-8h_Vnd-neg_2_S18_L003_R2_001.fastq.gz,vnd_ctrl,6-8,2,S18_L003,L003
6-8h_Vnd-pos_1_S11_L002,6-8h_Vnd-pos_1_S11_L002_R1_001.fastq.gz,6-8h_Vnd-pos_1_S11_L002_R2_001.fastq.gz,vnd,6-8,1,S11_L002,L002
6-8h_Vnd-pos_2_S12_L002,6-8h_Vnd-pos_2_S12_L002_R1_001.fastq.gz,6-8h_Vnd-pos_2_S12_L002_R2_001.fastq.gz,vnd,6-8,2,S12_L002,L002
8-10h_Elav-neg_1_S33_L006,8-10h_Elav-neg_1_S33_L006_R1_001.fastq.gz,8-10h_Elav-neg_1_S33_L006_R2_001.fastq.gz,elav_ctrl,8-10,1,S33_L006,L006
8-10h_Elav-neg_2_S34_L006,8-10h_Elav-neg_2_S34_L006_R1_001.fastq.gz,8-10h_Elav-neg_2_S34_L006_R2_001.fastq.gz,elav_ctrl,8-10,2,S34_L006,L006
8-10h_Elav-pos_1_S37_L007,8-10h_Elav-pos_1_S37_L007_R1_001.fastq.gz,8-10h_Elav-pos_1_S37_L007_R2_001.fastq.gz,elav,8-10,1,S37_L007,L007
8-10h_Elav-pos_2_S38_L007,8-10h_Elav-pos_2_S38_L007_R1_001.fastq.gz,8-10h_Elav-pos_2_S38_L007_R2_001.fastq.gz,elav,8-10,2,S38_L007,L007
8-10h_Repo-neg_1_S47_L008,8-10h_Repo-neg_1_S47_L008_R1_001.fastq.gz,8-10h_Repo-neg_1_S47_L008_R2_001.fastq.gz,repo_ctrl,8-10,1,S47_L008,L008
8-10h_Repo-neg_2_S48_L008,8-10h_Repo-neg_2_S48_L008_R1_001.fastq.gz,8-10h_Repo-neg_2_S48_L008_R2_001.fastq.gz,repo_ctrl,8-10,2,S48_L008,L008
8-10h_Repo-pos_1_S35_L006,8-10h_Repo-pos_1_S35_L006_R1_001.fastq.gz,8-10h_Repo-pos_1_S35_L006_R2_001.fastq.gz,repo,8-10,1,S35_L006,L006
8-10h_Repo-pos_2_S36_L006,8-10h_Repo-pos_2_S36_L006_R1_001.fastq.gz,8-10h_Repo-pos_2_S36_L006_R2_001.fastq.gz,repo,8-10,2,S36_L006,L006
Ind_4-6h_rep3_neg,Ind_4-6h_rep3_neg_R1.fastq.gz,Ind_4-6h_rep3_neg_R2.fastq.gz,ind_ctrl,4-6,1,S_46_ind,L_46_ind
Ind_4-6h_rep3_pos,Ind_4-6h_rep3_pos_R1.fastq.gz,Ind_4-6h_rep3_pos_R2.fastq.gz,ind,4-6,2,S_46_ind,L_46_ind
Ind_4-6h_rep4_neg,Ind_4-6h_rep4_neg_R1.fastq.gz,Ind_4-6h_rep4_neg_R2.fastq.gz,ind_ctrl,4-6,1,S_46_ind,L_46_ind
Ind_4-6h_rep4_pos,Ind_4-6h_rep4_pos_R1.fastq.gz,Ind_4-6h_rep4_pos_R2.fastq.gz,ind,4-6,2,S_46_ind,L_46_ind

Any help is much appreciated,
Best,
Konsta

Ricardo Wurmus

Jun 18, 2019, 5:12:58 PM
to SBS DOULFOOX, pigx

Hi,

>> Error: Directory cannot be locked. Please make sure that no other
>> Snakemake process is trying to create the same files in the following
>> directory:
>> /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/results
>> If you are sure that no other instances of snakemake are running on this
>> directory, the remaining lock was likely caused by a kill signal or a power
>> loss. It can be removed with the --unlock argument.

Have you tried unlocking by running pigx with the “--unlock” argument as
suggested in this message before retrying to run the pipeline?

--
Ricardo

Konsta

Jun 19, 2019, 2:38:22 AM
to pigx
Hi Ricardo, and thanks for the swift reply,


> Have you tried unlocking by running pigx with the “--unlock” argument as
> suggested in this message before retrying to run the pipeline?

I did not try that; instead, I did a dry run
pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv --dry
followed by the initial command, and now the computation seems to be running, i.e. reads are being trimmed.

Yesterday I googled the error message and had the impression that I might be using an unsupported version of Python...

So if I understand correctly, the directory was locked and inaccessible to my command because Snakemake could not be sure whether another instance of the pipeline was already running?
And I should have just tried
pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv --unlock
to recover after a Snakemake crash?

Best,
Konsta

 

B. Osberg

Jun 21, 2019, 4:47:13 AM
to SBS DOULFOOX, pi...@googlegroups.com

Hello Konsta,

This happens when a previous run of the pipeline was terminated unexpectedly: the directory remains "locked" to prevent another instance from starting. I recommend running the command once more with the "--unlock" option. That is,

pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv --unlock

And then try submitting again as usual (without "--unlock").
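For completeness, the whole recovery could look like this in the shell (a sketch assuming your settings files are unchanged and that the submission script from your first message is saved as, say, run_pigx.sh; that script name is just for illustration):

cd /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/
# clear the stale lock left behind by the killed run; this executes no jobs
pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv --unlock
# then resubmit the pipeline as usual
qsub run_pigx.sh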

regards,

-Bren


Ricardo Wurmus

Jun 21, 2019, 4:47:13 AM
to Konsta, pigx

Hi,

> yesterday I googled the error message and had the impression that I might
> be using an unsupported version of python...

Unlikely. If you’re using PiGx from Guix you needn’t worry about
unsupported versions.

> So if I understand correct a directory was locked and inaccessible for my
> command because the algorithm was not sure if another instance of the
> pipeline was already running ?
> And I should have just tried
>
>> pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv --unlock
>>
> to recover after a snakecrush ?

Snakemake (the workflow framework we happen to use for PiGx) keeps a
directory to track the execution state of the pipeline. When it crashes
or is killed, that directory isn’t always cleaned up, so on the next run
Snakemake assumes that it’s already (or still) running and refuses to
run again.

Using “--unlock” you can reset that state and assure Snakemake that it’s
fine to give it another try.
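Concretely, that state lives in a hidden .snakemake/ directory (a sketch, assuming Snakemake’s working directory is the output directory from your settings file; the exact layout can vary between Snakemake versions):

# lock files left behind by the crashed run live roughly here
ls /fast/AG_Zinzen/Konstantinos_Papadakis_onFAST/rnaSeqOfAlisData/results/.snakemake/locks
# "--unlock" clears them without executing any pipeline jobs
pigx-rnaseq -s usethesesettings.yaml samplesheet_usethis.csv --unlock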

--
Ricardo