Jobs exiting in juicer script

Kush B.

Nov 1, 2016, 7:43:21 AM
to 3D Genomics
Hi,

I am trying to run the Juicer script, but during the alignment step it produces the SAM file only partway before the individual job exits. It looks like this:

------------------------------------------------------------
Sender: LSF System <lsfa...@ottavino000-215.orchestra>
Subject: Job 4535172: <a1477977860_align2WT_1001.fastq> Exited

Job <a1477977860_align2WT_1001.fastq> was submitted from host <clarinet002-061.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <ottavino000-215.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_sh> was used as the working directory.
Started at Tue Nov  1 03:57:01 2016
Results reported at Tue Nov  1 05:58:05 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#!/bin/bash
#BSUB -q priority
#BSUB -o /groups/cbdm-db/kb124/FC_02524_sh/lsf.out
        #BSUB -W 1200
#BSUB -R "rusage[mem=128000]"
#BSUB -J "a1477977860_align2WT_1001.fastq"
# Align read2
if [ -n "" ] || [ "0" -eq 2 ]
then
           echo 'Running command bwa aln -q 15 /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq > /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sai && bwa samse /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sai /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq > /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sam '
   bwa aln -q 15 /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq > /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sai && bwa samse /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sai /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq > /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sam
   if [ $? -ne 0 ]
   then 
              echo "Alignment of /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq failed. Check /groups/cbdm-db/kb124/FC_02524_sh/lsf.out for results"
      exit 100
   else
      echo "(-: Short align of /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sam done successfully"
   fi

else
          echo 'Running command bwa mem -t 16 /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq > /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sam'
  bwa mem -t 16 /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq > /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sam
  if [ $? -ne 0 ]
  then 
     exit 100
  else
     echo "(-: Mem align of /groups/cbdm-db/kb124/FC_02524_sh/splits/WT_1_R2001.fastq.sam done successfully"
  fi
       fi

------------------------------------------------------------

TERM_CPULIMIT: job killed after reaching LSF CPU usage limit.
Exited with signal termination: CPU time limit exceeded.

Resource usage summary:

    CPU time   :  73183.00 sec.
    Max Memory :     41835 MB
    Max Swap   :     54183 MB

    Max Processes  :         4
    Max Threads    :        21

The output (if any) is above this job summary.

-----------------------------------------------------------------------------

I have changed the allocated memory (alloc_mem) in juicer.sh to 128000, but still no success. Do you think increasing the number of cores will help? I cannot see any option to change the number of cores in juicer.sh.

Thanks!

KB 

Neva Durand

Nov 1, 2016, 12:56:37 PM
to Kush B., 3D Genomics
It looks like it went over the time limit for that queue. What split size did you use? I'm surprised that the BWA job is using so much memory and so much time. It also looks like you're doing short read alignment; is that correct? BWA can take a number of threads via the "-t" flag. To ask for multiple cores, you would add a BSUB header line with "-n <# of cores>".

Best
Neva

--
Neva Cherniavsky Durand, Ph.D.
Staff Scientist, Aiden Lab

Neva Durand

Nov 1, 2016, 12:57:44 PM
to Kush B., 3D Genomics
Ah, never mind what I said about short read alignment.

It looks like you're calling bwa mem with 16 threads; you should add that to the #BSUB lines.
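For illustration, a minimal sketch of what matching the core request to bwa's thread count might look like (queue, memory, job name, and file names here are placeholders; exact values depend on your cluster):

```shell
#!/bin/bash
#BSUB -q priority
#BSUB -n 16                      # reserve 16 cores, matching bwa's -t below
#BSUB -R "rusage[mem=128000]"
#BSUB -o lsf.out
#BSUB -J "align_example"         # hypothetical job name

# bwa mem's -t flag sets the thread count; keep it equal to the -n request
# above so the job actually gets the cores it tries to use.
bwa mem -t 16 genome.fa reads.fastq > reads.sam
```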


Kush B.

Nov 2, 2016, 5:28:05 PM
to 3D Genomics, kushag...@gmail.com
Hi,

I was able to solve that issue by running the script with the short read option (-r). But the script exited again at a certain point, and I am not sure which step it is. The end of the lsf file looks like this:


Clean2: No such queue

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-064.orchestra>
Subject: Job 4546040: <a1478006353_fragmerge1> Exited

Job <a1478006353_fragmerge1> was submitted from host <clarinet002-063.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_sh> was used as the working directory.
Started at Tue Nov  1 16:54:27 2016
Results reported at Tue Nov  1 16:54:27 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_sh/lsf.out
    #BSUB -w " done(a1478006353_chimeric*) " 
    #BSUB -J "a1478006353_fragmerge1"
    bkill -q Clean2 0

------------------------------------------------------------

Exited with exit code 255.

Resource usage summary:

    CPU time   :      0.11 sec.

The output (if any) is above this job summary.

(-: Finished sorting all sorted files into a single merge.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-068.orchestra>
Subject: Job 4546041: <a1478006353_fragmerge> Done

Job <a1478006353_fragmerge> was submitted from host <clarinet002-063.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-068.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_sh> was used as the working directory.
Started at Tue Nov  1 16:54:27 2016
Results reported at Tue Nov  1 16:58:38 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 3600
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_sh/lsf.out
    #BSUB -w " done(a1478006353_chimeric*) "
    #BSUB -J "a1478006353_fragmerge"
    export LC_ALL=C
    if [ -d /groups/cbdm-db/kb124/FC_02524_sh/done_splits ]
    then
       mv /groups/cbdm-db/kb124/FC_02524_sh/done_splits/* /groups/cbdm-db/kb124/FC_02524_sh/splits/.
    fi
    if ! sort -T /groups/cbdm-db/kb124/FC_02524_sh/HIC_tmp -m -k2,2d -k6,6d -k4,4n -k8,8n -k1,1n -k5,5n -k3,3n /groups/cbdm-db/kb124/FC_02524_sh/splits/*.sort.txt  > /groups/cbdm-db/kb124/FC_02524_sh/aligned/merged_sort.txt
    then 
echo "***! Some problems occurred somewhere in creating  sorted align files."
    else
echo "(-: Finished sorting all sorted files into a single merge."
        rm -r /groups/cbdm-db/kb124/FC_02524_sh/HIC_tmp
    fi

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :    202.59 sec.
    Max Memory :        14 MB
    Max Swap   :       249 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


KB




Neva Durand

Nov 3, 2016, 2:23:36 AM
to Kush B., 3D Genomics
You should not use the short read flag if your reads are not short.

This looks like an LSF conversion bug where the queue "Clean2" (from AWS) was not removed.

Otherwise, you have successfully merged the files. You could restart Juicer from the dedup stage by running juicer.sh (whatever flags) -S dedup.
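A hypothetical restart invocation might look like the following (all paths below are placeholders; reuse the exact flags from the original run and append the stage flag):

```shell
# Re-run juicer.sh with the same flags as the original run, plus -S dedup
# to skip straight to the deduplication stage. Paths are placeholders.
/path/to/juicer.sh -g mm10 -d /path/to/topdir -q priority \
    -z /path/to/BWAIndex/genome.fa -S dedup
```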


Kush B.

Nov 3, 2016, 3:07:20 AM
to 3D Genomics, kushag...@gmail.com
Thank you for the suggestion.

I have read 1 at 43 bp and read 2 at 42 bp; is that considered short read?

KB

Neva Durand

Nov 3, 2016, 3:35:12 AM
to Kush B., 3D Genomics
Yes, in that case you should use the short read aligner.

Best
Neva


Kush B.

Nov 3, 2016, 3:45:45 AM
to 3D Genomics, kushag...@gmail.com
Hi,

Sorry to bother you again, but there is another issue, and I am not sure what exactly it means. I am copying the whole lsf output.

Thu Nov  3 03:08:34 EDT 2016
Juicer version:1.5
/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer.sh -g mm10 -d /groups/cbdm-db/kb124/FC_02524_test -q priority -p /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes -y /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -z /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa -D /groups/cbdm-db/kb124/juicer-master_test/AWS -r -S dedup

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-061.orchestra>
Subject: Job 4646201: <a1478156907_cmd> Done

Job <a1478156907_cmd> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-061.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:33 2016
Results reported at Thu Nov  3 03:08:34 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    date
    echo "Juicer version:1.5" 
    echo "/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer.sh -g mm10 -d /groups/cbdm-db/kb124/FC_02524_test -q priority -p /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes -y /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -z /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa -D /groups/cbdm-db/kb124/juicer-master_test/AWS -r -S dedup"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.10 sec.

The output (if any) is above this job summary.

Job <a1478156907_clean1> is not found
Job <4646208> is submitted to queue <priority>.
Job <4646209> is submitted to queue <priority>.
Job <4646210> is submitted to queue <priority>.
Job <4646211> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-066.orchestra>
Subject: Job 4646202: <a1478156907_osplit> Done

Job <a1478156907_osplit> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-066.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:40 2016
Results reported at Thu Nov  3 03:08:50 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    
    #BSUB -J "a1478156907_osplit"
    bkill -J a1478156907_clean1
    awk -v queue=priority -v outfile=/groups/cbdm-db/kb124/FC_02524_test/lsf.out -v juicedir=/groups/cbdm-db/kb124/juicer-master_test/AWS  -v dir=/groups/cbdm-db/kb124/FC_02524_test/aligned -v queuetime=3600 -v groupname=a1478156907 -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/split_rmdups.awk /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_sort.txt

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      4.87 sec.
    Max Memory :         4 MB
    Max Swap   :       335 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.

(-: Alignment and merge done, launching other jobs.
Job <4646203> is being terminated
Job <4646212> is submitted to queue <priority>.
Job <4646213> is submitted to queue <priority>.
Job <4646214> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-062.orchestra>
Subject: Job 4646204: <a1478156907_launch> Done

Job <a1478156907_launch> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-062.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:54 2016
Results reported at Thu Nov  3 03:08:57 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
        #!/bin/bash
        #BSUB -q priority
        #BSUB -W 1200
        #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
        #BSUB -w " done(a1478156907_osplit) "
        #BSUB -J "a1478156907_launch"
        echo "(-: Alignment and merge done, launching other jobs."
        bkill -J a1478156907_clean2 
        bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 3600 -R "rusage[mem=32000]" -w "done(a1478156907_rmsplit) && done(a1478156907_osplit)" -J "a1478156907_stats" "df -h;_JAVA_OPTIONS=-Xmx16384m; export LC_ALL=en_US.UTF-8; echo -e 'Experiment description: ' > /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/statistics.pl -s /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -l GATCGATC -o /groups/cbdm-db/kb124/FC_02524_test/aligned/stats_dups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/dups.txt; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*.res.txt | awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/stats_sub.awk >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt; java -cp /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/ LibraryComplexity /groups/cbdm-db/kb124/FC_02524_test/aligned inter.txt >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/statistics.pl -s /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -l GATCGATC -o /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*_abnorm.sam > /groups/cbdm-db/kb124/FC_02524_test/aligned/abnormal.sam; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*_unmapped.sam > /groups/cbdm-db/kb124/FC_02524_test/aligned/unmapped.sam; awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/collisions.awk /groups/cbdm-db/kb124/FC_02524_test/aligned/abnormal.sam > /groups/cbdm-db/kb124/FC_02524_test/aligned/collisions.txt"
        bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 3600 -R "rusage[mem=32000]" -w "done(a1478156907_stats)" -J "a1478156907_hic" "df -h;export _JAVA_OPTIONS=-Xmx16384m; if [ -n \"\" ]; then /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_hists.m -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; else /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -f /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_hists.m -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; fi ;"
        bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 3600 -R "rusage[mem=32000]" -w "done(a1478156907_rmsplit) && done(a1478156907_osplit)" -J "a1478156907_hic30" "df -h;export _JAVA_OPTIONS=-Xmx16384m; export LC_ALL=en_US.UTF-8; echo -e 'Experiment description: ' > /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*.res.txt | awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/stats_sub.awk >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt; java -cp /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/ LibraryComplexity /groups/cbdm-db/kb124/FC_02524_test/aligned inter_30.txt >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/statistics.pl -s /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -l GATCGATC -o /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt -q 30 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt; export _JAVA_OPTIONS=-Xmx8192m; if [ -n \"\" ]; then /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30_hists.m -q 30 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; else /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -f /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30_hists.m -q 30 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; fi"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.53 sec.

The output (if any) is above this job summary.

Job <4646215> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-064.orchestra>
Subject: Job 4646205: <a1478156907_postproc_wrap> Done

Job <a1478156907_postproc_wrap> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:01 2016
Results reported at Thu Nov  3 03:09:02 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    #BSUB -w " done(a1478156907_launch) "
    #BSUB -J "a1478156907_postproc_wrap"
    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 3600 -R "rusage[mem=32000]" -w "done(a1478156907_hic30)" -J "a1478156907_postproc" "/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer_postprocessing.sh -j /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox -i /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic -m /groups/cbdm-db/kb124/juicer-master_test/AWS/references/motif -g mm10"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.33 sec.

The output (if any) is above this job summary.

Job <4646217> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-068.orchestra>
Subject: Job 4646206: <a1478156907_prep_done> Done

Job <a1478156907_prep_done> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-068.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:08 2016
Results reported at Thu Nov  3 03:09:09 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    #BSUB -J "a1478156907_prep_done"
    #BSUB -w " done(a1478156907_postproc_wrap) "
    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200 -w "done(a1478156907_stats) && done(a1478156907_postproc) && done(a1478156907_hic30)  && done(a1478156907_hic)" -J "a1478156907_done" "bkill -J a1478156907_clean3; export splitdir=/groups/cbdm-db/kb124/FC_02524_test/splits; export outputdir=/groups/cbdm-db/kb124/FC_02524_test/aligned; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/check.sh;"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.21 sec.

The output (if any) is above this job summary.

/tmp/.lsbtmp90016/.lsbatch/1478156913.4646207.shell: line 6: unexpected EOF while looking for matching `"'
/tmp/.lsbtmp90016/.lsbatch/1478156913.4646207.shell: line 8: syntax error: unexpected end of file

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-061.orchestra>
Subject: Job 4646207: <    #!/bin/bash;    #BSUB -q priority;    #BSUB -W 1200;    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out;    #BSUB -w " done(a1478156907_launch) ";    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200  -w "exit(a1478156907_postproc) || exit(a1478156907_stats) || exit(a1478156907_hic) || exit(a1478156907_hic30)" -J "a1478156907_clean3" "bkill -q -J "a1478156907_prep_done 0; bkill -q priority 0;"> Exited

Job <    #!/bin/bash;    #BSUB -q priority;    #BSUB -W 1200;    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out;    #BSUB -w " done(a1478156907_launch) ";    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200  -w "exit(a1478156907_postproc) || exit(a1478156907_stats) || exit(a1478156907_hic) || exit(a1478156907_hic30)" -J "a1478156907_clean3" "bkill -q -J "a1478156907_prep_done 0; bkill -q priority 0;"> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-061.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:14 2016
Results reported at Thu Nov  3 03:09:15 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    #BSUB -w " done(a1478156907_launch) "
    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200  -w "exit(a1478156907_postproc) || exit(a1478156907_stats) || exit(a1478156907_hic) || exit(a1478156907_hic30)" -J "a1478156907_clean3" "bkill -q -J "a1478156907_prep_done 0; bkill -q priority 0;"

------------------------------------------------------------

Exited with exit code 2.

Resource usage summary:

    CPU time   :      0.09 sec.

The output (if any) is above this job summary.
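The "unexpected EOF while looking for matching `\"'" above appears to come from unbalanced double quotes in the generated clean3 line: the quote right after -q -J closes the outer string early, leaving the final quote unmatched. A simplified, hypothetical sketch of the pitfall, using echo in place of bsub/bkill (which need a live LSF cluster):

```shell
#!/bin/sh
# Broken pattern (as in the log): the quote after -J ends the outer string,
# so the trailing quote has no partner and the shell hits end-of-file:
#   echo "bkill -q -J "jobname 0; bkill -q priority 0;"
#
# Working pattern: escape the inner quotes so the command stays one string.
echo "bkill -J \"jobname\" 0; bkill -q priority 0"
```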


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-063.orchestra>
Subject: Job 4646208: <a1478156907_msplit0000_> Done

Job <a1478156907_msplit0000_> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-063.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:54 2016
Results reported at Thu Nov  3 03:09:17 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
awk -f  /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/dups.awk -v name=/groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit0000_ /groups/cbdm-db/kb124/FC_02524_test/aligned/split0000;

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :     22.46 sec.
    Max Memory :         6 MB
    Max Swap   :       338 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-066.orchestra>
Subject: Job 4646209: <a1478156907_msplit0001_> Done

Job <a1478156907_msplit0001_> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-066.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:21 2016
Results reported at Thu Nov  3 03:09:25 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/dups.awk -v name=/groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit0001_ /groups/cbdm-db/kb124/FC_02524_test/aligned/split0001;

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      2.68 sec.
    Max Memory :         4 MB
    Max Swap   :       336 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-067.orchestra>
Subject: Job 4646210: <a1478156907_catsplit> Done

Job <a1478156907_catsplit> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-067.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:28 2016
Results reported at Thu Nov  3 03:09:31 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
cat /groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit*_optdups.txt > /groups/cbdm-db/kb124/FC_02524_test/aligned/opt_dups.txt;  cat /groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit*_dups.txt > /groups/cbdm-db/kb124/FC_02524_test/aligned/dups.txt;cat /groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit*_merged_nodups.txt > /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt; 

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.51 sec.
    Max Memory :         3 MB
    Max Swap   :       329 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-062.orchestra>
Subject: Job 4646211: <a1478156907_rmsplit> Done

Job <a1478156907_rmsplit> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-062.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:35 2016
Results reported at Thu Nov  3 03:09:36 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
 rm /groups/cbdm-db/kb124/FC_02524_test/aligned/*_msplit*_optdups.txt; rm /groups/cbdm-db/kb124/FC_02524_test/aligned/*_msplit*_dups.txt; rm /groups/cbdm-db/kb124/FC_02524_test/aligned/*_msplit*_merged_nodups.txt; rm /groups/cbdm-db/kb124/FC_02524_test/aligned/split*;

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.12 sec.

The output (if any) is above this job summary.

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             2.0G  681M  1.2G  37% /
tmpfs                  48G   12K   48G   1% /dev/shm
/dev/sda1             256M  260K  256M   1% /boot/efi
/dev/mapper/cobbler-tmp
                      226G  543M  214G   1% /tmp
/dev/mapper/cobbler-usr
                      7.8G  1.7G  5.8G  22% /usr
/dev/mapper/cobbler-var
                      7.8G  742M  6.7G  10% /var
home.files.orchestra:/ifs/systems/Orchestra/home
                       60T   59T  1.4T  98% /home
data1.files.orchestra:/ifs/systems/Orchestra/data1
                      7.7P  5.4P  2.4P  70% /n/data1
data2.files.orchestra:/ifs/systems/Orchestra/data2
                      4.2P  3.5P  683T  84% /n/data2
groups.files.orchestra:/ifs/systems/Orchestra/groups
                      4.2P  3.5P  683T  84% /groups
home.files.orchestra:/ifs/systems/Orchestra/opt_centos
                      122T  102T   20T  85% /opt
opt.files.orchestra:/fs_Orchestra/opt/lsf
                      250G   72G  179G  29% /opt/lsf
usr.files.orchestra:/fs_Orchestra/usr/local/x86_64-linux
                      200G  120G   81G  60% /usr/local
www.files.orchestra:/fs_Orchestra/www
                       50G     0   50G   0% /www
/dev/no_backup        2.0P  639T  1.4P  32% /n/no_backup
exsc2-mds00@tcp:exsc2-mds01@tcp:/scratch2
                      853T   61T  784T   8% /n/scratch2
Picked up _JAVA_OPTIONS: -Xmx16384m
Picked up _JAVA_OPTIONS: -Xmx8192m
_test/splits/*_abnorm.sam: No such file or directory

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-064.orchestra>
Subject: Job 4646212: <a1478156907_stats> Done

Job <a1478156907_stats> was submitted from host <clarinet002-062.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:42 2016
Results reported at Thu Nov  3 03:10:18 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
df -h;_JAVA_OPTIONS=-Xmx16384m; export LC_ALL=en_US.UTF-8; echo -e 'Experiment description: ' > /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/statistics.pl -s /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -l GATCGATC -o /groups/cbdm-db/kb124/FC_02524_test/aligned/stats_dups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/dups.txt; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*.res.txt | awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/stats_sub.awk >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt; java -cp /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/ LibraryComplexity /groups/cbdm-db/kb124/FC_02524_test/aligned inter.txt >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/statistics.pl -s /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -l GATCGATC -o /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*_abnorm.sam > /groups/cbdm-db/kb124/FC_02524_test/aligned/abnormal.sam; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*_unmapped.sam > /groups/cbdm-db/kb124/FC_02524_test/aligned/unmapped.sam; awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/collisions.awk /groups/cbdm-db/kb124/FC_02524_test/aligned/abnormal.sam > /groups/cbdm-db/kb124/FC_02524_test/aligned/collisions.txt
------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :     33.57 sec.
    Max Memory :       584 MB
    Max Swap   :       833 MB

    Max Processes  :         3
    Max Threads    :         4

The output (if any) is above this job summary.

Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3             2.0G  681M  1.2G  37% /
tmpfs                  48G  4.0K   48G   1% /dev/shm
/dev/sda1             256M  260K  256M   1% /boot/efi
/dev/mapper/cobbler-tmp
                      226G  516M  214G   1% /tmp
/dev/mapper/cobbler-usr
                      7.8G  1.7G  5.8G  23% /usr
/dev/mapper/cobbler-var
                      7.8G  732M  6.7G  10% /var
home.files.orchestra:/ifs/systems/Orchestra/home
                       60T   59T  1.4T  98% /home
data1.files.orchestra:/ifs/systems/Orchestra/data1
                      7.7P  5.4P  2.4P  70% /n/data1
data2.files.orchestra:/ifs/systems/Orchestra/data2
                      4.2P  3.5P  683T  84% /n/data2
groups.files.orchestra:/ifs/systems/Orchestra/groups
                      4.2P  3.5P  683T  84% /groups
home.files.orchestra:/ifs/systems/Orchestra/opt_centos
                      122T  102T   20T  85% /opt
opt.files.orchestra:/fs_Orchestra/opt/lsf
                      250G   72G  179G  29% /opt/lsf
usr.files.orchestra:/fs_Orchestra/usr/local/x86_64-linux
                      200G  120G   81G  60% /usr/local
www.files.orchestra:/fs_Orchestra/www
                       50G     0   50G   0% /www
/dev/no_backup        2.0P  639T  1.4P  32% /n/no_backup
exsc2-mds00@tcp:exsc2-mds01@tcp:/scratch2
                      853T   61T  784T   8% /n/scratch2
Picked up _JAVA_OPTIONS: -Xmx16384m
Nov 03, 2016 3:10:26 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: Couldn't create user preferences directory. User preferences are unusable.
Nov 03, 2016 3:10:26 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: java.io.IOException: No such file or directory
Failed to create user directory!
Exception in thread "main" java.lang.ExceptionInInitializerError
at juicebox.tools.utils.original.Preprocessor.preprocess(Preprocessor.java:186)
at juicebox.tools.clt.old.PreProcessing.run(PreProcessing.java:98)
at juicebox.tools.HiCTools.main(HiCTools.java:77)
Caused by: java.lang.NullPointerException
at juicebox.DirectoryManager.getHiCDirectory(DirectoryManager.java:113)
at juicebox.HiCGlobals.<clinit>(HiCGlobals.java:52)
... 3 more
Nov 03, 2016 3:10:28 AM java.util.prefs.FileSystemPreferences checkLockFile0ErrorCode
WARNING: Could not lock User prefs.  Unix error code 2.
Nov 03, 2016 3:10:28 AM java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-062.orchestra>
Subject: Job 4646213: <a1478156907_hic> Exited

Job <a1478156907_hic> was submitted from host <clarinet002-062.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-062.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:10:23 2016
Results reported at Thu Nov  3 03:10:28 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
df -h;export _JAVA_OPTIONS=-Xmx16384m; if [ -n "" ]; then /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_hists.m -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; else /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -f /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_hists.m -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; fi ;
------------------------------------------------------------

Exited with exit code 1.

Resource usage summary:

    CPU time   :      2.78 sec.

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-065.orchestra>
Subject: Job 4646214: <a1478156907_hic30> Exited

Job <a1478156907_hic30> was submitted from host <clarinet002-062.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-065.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:42 2016
Results reported at Thu Nov  3 03:10:28 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
df -h;export _JAVA_OPTIONS=-Xmx16384m; export LC_ALL=en_US.UTF-8; echo -e 'Experiment description: ' > /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt; cat /groups/cbdm-db/kb124/FC_02524_test/splits/*.res.txt | awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/stats_sub.awk >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt; java -cp /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/ LibraryComplexity /groups/cbdm-db/kb124/FC_02524_test/aligned inter_30.txt >> /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt; /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/statistics.pl -s /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -l GATCGATC -o /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt -q 30 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt; export _JAVA_OPTIONS=-Xmx8192m; if [ -n "" ]; then /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30_hists.m -q 30 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; else /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -f /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30_hists.m -q 30 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; fi
------------------------------------------------------------

Exited with exit code 1.

Resource usage summary:

    CPU time   :     42.51 sec.
    Max Memory :       574 MB
    Max Swap   :       823 MB

    Max Processes  :         4
    Max Threads    :         4

The output (if any) is above this job summary.

Thank you!

KB

On Thursday, November 3, 2016 at 3:35:12 AM UTC-4, Neva Durand wrote:
Yes, in that case you should use the short read aligner.

Best
Neva
On Thu, Nov 3, 2016 at 8:07 AM, Kush B. <kushag...@gmail.com> wrote:
Thank you for the suggestion.

My read 1 is 43 bp and read 2 is 42 bp; is that considered a short read?

KB

On Thursday, November 3, 2016 at 2:23:36 AM UTC-4, Neva Durand wrote:
You should not use the short read flag if your reads are not short.

This looks like an LSF conversion bug where the queue "Clean2" (from AWS) was not removed.

Otherwise, you have successfully merged the files. You could restart Juicer from the dedup stage by running juicer.sh (with the same flags as before) -S dedup
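A minimal sketch of such a restart, with placeholder paths (substitute the flags from your original juicer.sh invocation; only -S dedup is the addition):

```shell
# Placeholder paths -- reuse the same flags as the original juicer.sh run,
# adding -S dedup to skip straight to the deduplication stage.
JUICER=/path/to/juicer/scripts/juicer.sh
WORKDIR=/path/to/FC_02524

CMD="$JUICER -g mm10 -d $WORKDIR -q priority -S dedup"
echo "$CMD"   # inspect first, then run with: eval "$CMD"
```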
On Wed, Nov 2, 2016 at 10:28 PM, Kush B. <kushag...@gmail.com> wrote:
Hi,

I was able to solve the issue by running the script with the short read option (-r), but the script exited again at a certain point, and I am not sure which step it is. The end of the lsf file looks like this:


Clean2: No such queue

------------------------------------------------------------
Sender: LSF System <lsfadmin@clarinet002-064.orchestra>
Subject: Job 4546040: <a1478006353_fragmerge1> Exited

Job <a1478006353_fragmerge1> was submitted from host <clarinet002-063.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_sh> was used as the working directory.
Started at Tue Nov  1 16:54:27 2016
Results reported at Tue Nov  1 16:54:27 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_sh/lsf.out
    #BSUB -w " done(a1478006353_chimeric*) " 
    #BSUB -J "a1478006353_fragmerge1"
    bkill -q Clean2 0

------------------------------------------------------------

Exited with exit code 255.

Resource usage summary:

    CPU time   :      0.11 sec.

The output (if any) is above this job summary.

(-: Finished sorting all sorted files into a single merge.

------------------------------------------------------------
Sender: LSF System <lsfadmin@clarinet002-068.orchestra>


KB

------------------------------------------------------------
Sender: LSF System <lsfadmin@ottavino000-215.orchestra>

Neva Durand

Nov 3, 2016, 4:04:43 AM
to Kush B., 3D Genomics
Things mostly look fine. Please look in the aligned directory and see whether the files merged_sort, merged_nodups, and dups are there. Also have a look at the inter.txt file and see if the statistics make sense.

The one problem I see is that the hic files failed to be created due to the user prefs problem. This is probably due to your Java setup or to running in an unusual directory.

Once you've confirmed that merged_nodups and the statistics are correct, you can run the juicebox_tools jar directly on your files and pass in the user home via a Java flag: -Duser.home=<writable directory>


Best
Neva
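For reference, Java system properties are written as -Dname=value (no space after -D) and must come before -jar. A sketch of the direct invocation Neva describes, where the jar name, arguments, and paths are assumptions for illustration, not values taken from this thread:

```shell
# Assumed jar name and paths -- adjust to your installation. The -Duser.home
# property points Java's user-preferences machinery at a writable directory,
# which works around the "Couldn't create user preferences directory" error.
JAR=/path/to/juicebox_tools.jar
PREFS_HOME=/path/to/writable_dir

CMD="java -Duser.home=$PREFS_HOME -Xmx16384m -jar $JAR pre merged_nodups.txt inter.hic mm10"
echo "$CMD"   # inspect first, then run with: eval "$CMD"
```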


On Thursday, November 3, 2016, Kush B. <kushag...@gmail.com> wrote:
Hi,

Sorry to bother you again, but there is another issue. I am not sure what exactly it means, so I am copying the whole lsf output.

Thu Nov  3 03:08:34 EDT 2016
Juicer version:1.5
/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer.sh -g mm10 -d /groups/cbdm-db/kb124/FC_02524_test -q priority -p /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes -y /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -z /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa -D /groups/cbdm-db/kb124/juicer-master_test/AWS -r -S dedup

------------------------------------------------------------
Sender: LSF System <lsfadmin@clarinet002-061.orchestra>
Subject: Job 4646201: <a1478156907_cmd> Done

Job <a1478156907_cmd> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-061.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:33 2016
Results reported at Thu Nov  3 03:08:34 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    date
    echo "Juicer version:1.5" 
    echo "/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer.sh -g mm10 -d /groups/cbdm-db/kb124/FC_02524_test -q priority -p /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes -y /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -z /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa -D /groups/cbdm-db/kb124/juicer-master_test/AWS -r -S dedup"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.10 sec.

The output (if any) is above this job summary.

Job <a1478156907_clean1> is not found
Job <4646208> is submitted to queue <priority>.
Job <4646209> is submitted to queue <priority>.
Job <4646210> is submitted to queue <priority>.
Job <4646211> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfadmin@clarinet002-066.orchestra>
Sender: LSF System <lsfadmin@clarinet002-062.orchestra>
Sender: LSF System <lsfadmin@clarinet002-064.orchestra>
Subject: Job 4646205: <a1478156907_postproc_wrap> Done

Job <a1478156907_postproc_wrap> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:01 2016
Results reported at Thu Nov  3 03:09:02 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    #BSUB -w " done(a1478156907_launch) "
    #BSUB -J "a1478156907_postproc_wrap"
    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 3600 -R "rusage[mem=32000]" -w "done(a1478156907_hic30)" -J "a1478156907_postproc" "/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer_postprocessing.sh -j /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox -i /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic -m /groups/cbdm-db/kb124/juicer-master_test/AWS/references/motif -g mm10"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.33 sec.

The output (if any) is above this job summary.

Job <4646217> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfadmin@clarinet002-068.orchestra>
Sender: LSF System <lsfadmin@clarinet002-061.orchestra>

------------------------------------------------------------
Sender: LSF System <lsfadmin@clarinet002-065.orchestra>
Sender: LSF System <lsfa...@clarinet002-064.orchestra>
Subject: Job 4546040: <a1478006353_fragmerge1> Exited

Job <a1478006353_fragmerge1> was submitted from host <clarinet002-063.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_sh> was used as the working directory.
Started at Tue Nov  1 16:54:27 2016
Results reported at Tue Nov  1 16:54:27 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_sh/lsf.out
    #BSUB -w " done(a1478006353_chimeric*) " 
    #BSUB -J "a1478006353_fragmerge1"
    bkill -q Clean2 0

------------------------------------------------------------

Exited with exit code 255.

Resource usage summary:

    CPU time   :      0.11 sec.

The output (if any) is above this job summary.

(-: Finished sorting all sorted files into a single merge.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-068.orchestra>


KB


Kush B.

unread,
Nov 3, 2016, 4:13:35 AM11/3/16
to 3D Genomics, kushag...@gmail.com
There are merged_sort, merged_nodups and dups files.

Here is the inter.txt file generated from a test run of 5 million reads; does it make sense?


Experiment description: 
Sequenced Read Pairs:  1,489,951
 Normal Paired: 1,155,173 (77.53%)
 Chimeric Paired: 0 (0.00%)
 Chimeric Ambiguous: 0 (0.00%)
 Unmapped: 334,778 (22.47%)
 Ligation Motif Present: 289,079 (19.40%)
Alignable (Normal+Chimeric Paired): 1,155,173 (77.53%)
Unique Reads: 1,138,383 (76.40%)
PCR Duplicates: 16,316 (1.10%)
Optical Duplicates: 474 (0.03%)
Library Complexity Estimate: 40,473,765
Intra-fragment Reads: 179,055 (12.02% / 15.73%)
Below MAPQ Threshold: 297,243 (19.95% / 26.11%)
Hi-C Contacts: 662,085 (44.44% / 58.16%)
 Ligation Motif Present: 14,413  (0.97% / 1.27%)
 3' Bias (Long Range): 82% - 18%
 Pair Type %(L-I-O-R): 25% - 25% - 25% - 25%
Inter-chromosomal: 211,483  (14.19% / 18.58%)
Intra-chromosomal: 450,602  (30.24% / 39.58%)
Short Range (<20Kb): 108,896  (7.31% / 9.57%)
Long Range (>20Kb): 341,706  (22.93% / 30.02%)

Thank you!

KB


Neva Durand

unread,
Nov 3, 2016, 4:22:32 AM11/3/16
to Kush B., 3D Genomics
The first line says 1.4M reads, not 5M, so it appears something is wrong. 

I would remove the line referring to Clean2 in your juicer script, remove the splits and aligned folders, and rerun from the beginning. 
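For reference, that cleanup might look like the following sketch. It runs against a scratch copy of the script so it is safe to try; `JUICER` and `TOPDIR` are placeholders — point them at your real `scripts/juicer.sh` and the `-d` top-level directory of your run before applying it for real.

```shell
# Demo on a scratch copy; set JUICER to the real scripts/juicer.sh to apply.
JUICER=./juicer_demo.sh
printf '%s\n' '#!/bin/bash' 'bkill -q Clean2 0' 'echo rest of pipeline' > "$JUICER"

# Comment out the literal bkill line that makes fragmerge1 exit 255
# when no Clean2 queue exists on the cluster.
sed -i.bak '/bkill -q Clean2/s/^/# /' "$JUICER"

# Remove partial results before rerunning from the beginning.
TOPDIR=.   # placeholder: the top-level run directory
rm -rf "$TOPDIR/splits" "$TOPDIR/aligned"

grep 'Clean2' "$JUICER"   # the line should now start with '# '
```

`sed -i.bak` keeps the original script as `juicer_demo.sh.bak`, so the change is easy to revert.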

Best 
Sender: LSF System <lsfa...@clarinet002-061.orchestra>
Subject: Job 4646201: <a1478156907_cmd> Done

Job <a1478156907_cmd> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-061.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:33 2016
Results reported at Thu Nov  3 03:08:34 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    date
    echo "Juicer version:1.5" 
    echo "/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer.sh -g mm10 -d /groups/cbdm-db/kb124/FC_02524_test -q priority -p /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes -y /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -z /groups/shared_databases/igenome/Mus_musculus/UCSC/mm10/Sequence/BWAIndex/genome.fa -D /groups/cbdm-db/kb124/juicer-master_test/AWS -r -S dedup"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.10 sec.

The output (if any) is above this job summary.

Job <a1478156907_clean1> is not found
Job <4646208> is submitted to queue <priority>.
Job <4646209> is submitted to queue <priority>.
Job <4646210> is submitted to queue <priority>.
Job <4646211> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-064.orchestra>
Subject: Job 4646205: <a1478156907_postproc_wrap> Done

Job <a1478156907_postproc_wrap> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-064.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:01 2016
Results reported at Thu Nov  3 03:09:02 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    #BSUB -w " done(a1478156907_launch) "
    #BSUB -J "a1478156907_postproc_wrap"
    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 3600 -R "rusage[mem=32000]" -w "done(a1478156907_hic30)" -J "a1478156907_postproc" "/groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicer_postprocessing.sh -j /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox -i /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_30.hic -m /groups/cbdm-db/kb124/juicer-master_test/AWS/references/motif -g mm10"

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.33 sec.

The output (if any) is above this job summary.

Job <4646217> is submitted to queue <priority>.

------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-061.orchestra>
Subject: Job 4646207: <    #!/bin/bash;    #BSUB -q priority;    #BSUB -W 1200;    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out;    #BSUB -w " done(a1478156907_launch) ";    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200  -w "exit(a1478156907_postproc) || exit(a1478156907_stats) || exit(a1478156907_hic) || exit(a1478156907_hic30)" -J "a1478156907_clean3" "bkill -q -J "a1478156907_prep_done 0; bkill -q priority 0;"> Exited

Job <    #!/bin/bash;    #BSUB -q priority;    #BSUB -W 1200;    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out;    #BSUB -w " done(a1478156907_launch) ";    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200  -w "exit(a1478156907_postproc) || exit(a1478156907_stats) || exit(a1478156907_hic) || exit(a1478156907_hic30)" -J "a1478156907_clean3" "bkill -q -J "a1478156907_prep_done 0; bkill -q priority 0;"> was submitted from host <clarinet002-068.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-061.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:14 2016
Results reported at Thu Nov  3 03:09:15 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
    #!/bin/bash
    #BSUB -q priority
    #BSUB -W 1200
    #BSUB -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out
    #BSUB -w " done(a1478156907_launch) "
    bsub -o /groups/cbdm-db/kb124/FC_02524_test/lsf.out -q priority -W 1200  -w "exit(a1478156907_postproc) || exit(a1478156907_stats) || exit(a1478156907_hic) || exit(a1478156907_hic30)" -J "a1478156907_clean3" "bkill -q -J "a1478156907_prep_done 0; bkill -q priority 0;"

------------------------------------------------------------

Exited with exit code 2.

Resource usage summary:

    CPU time   :      0.09 sec.

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-063.orchestra>
Subject: Job 4646208: <a1478156907_msplit0000_> Done

Job <a1478156907_msplit0000_> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-063.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:08:54 2016
Results reported at Thu Nov  3 03:09:17 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
awk -f  /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/dups.awk -v name=/groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit0000_ /groups/cbdm-db/kb124/FC_02524_test/aligned/split0000;

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :     22.46 sec.
    Max Memory :         6 MB
    Max Swap   :       338 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-066.orchestra>
Subject: Job 4646209: <a1478156907_msplit0001_> Done

Job <a1478156907_msplit0001_> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-066.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:21 2016
Results reported at Thu Nov  3 03:09:25 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
awk -f /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/dups.awk -v name=/groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit0001_ /groups/cbdm-db/kb124/FC_02524_test/aligned/split0001;

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      2.68 sec.
    Max Memory :         4 MB
    Max Swap   :       336 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-067.orchestra>
Subject: Job 4646210: <a1478156907_catsplit> Done

Job <a1478156907_catsplit> was submitted from host <clarinet002-066.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-067.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:09:28 2016
Results reported at Thu Nov  3 03:09:31 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
cat /groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit*_optdups.txt > /groups/cbdm-db/kb124/FC_02524_test/aligned/opt_dups.txt;  cat /groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit*_dups.txt > /groups/cbdm-db/kb124/FC_02524_test/aligned/dups.txt;cat /groups/cbdm-db/kb124/FC_02524_test/aligned/a1478156907_msplit*_merged_nodups.txt > /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt; 

------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time   :      0.51 sec.
    Max Memory :         3 MB
    Max Swap   :       329 MB

    Max Processes  :         4
    Max Threads    :         5

The output (if any) is above this job summary.


------------------------------------------------------------
Sender: LSF System <lsfa...@clarinet002-062.orchestra>
Subject: Job 4646213: <a1478156907_hic> Exited

Job <a1478156907_hic> was submitted from host <clarinet002-062.orchestra> by user <kb124> in cluster <hms_orchestra>.
Job was executed on host(s) <clarinet002-062.orchestra>, in queue <priority>, as user <kb124> in cluster <hms_orchestra>.
</tmp/.lsbtmp90016> was used as the home directory.
</groups/cbdm-db/kb124/FC_02524_test> was used as the working directory.
Started at Thu Nov  3 03:10:23 2016
Results reported at Thu Nov  3 03:10:28 2016

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
df -h;export _JAVA_OPTIONS=-Xmx16384m; if [ -n "" ]; then /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_hists.m -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; else /groups/cbdm-db/kb124/juicer-master_test/AWS/scripts/juicebox pre -f /groups/cbdm-db/kb124/juicer-master/misc/mm10_MboI.txt -s /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.txt -g /groups/cbdm-db/kb124/FC_02524_test/aligned/inter_hists.m -q 1 /groups/cbdm-db/kb124/FC_02524_test/aligned/merged_nodups.txt /groups/cbdm-db/kb124/FC_02524_test/aligned/inter.hic /groups/cbdm_lab/kb124/bedtools/mm10.chrom.sizes; fi ;
------------------------------------------------------------

Exited with exit code 1.

Resource usage summary:

    CPU time   :      2.78 sec.

The output (if any) is above this job summary.


------------------------------------------------------------


Kush B.

unread,
Nov 3, 2016, 4:26:56 AM11/3/16
to 3D Genomics, kushag...@gmail.com
Sorry for the confusion, it was 1.4 million reads not 5 million.

Neva Durand

unread,
Nov 3, 2016, 7:59:37 AM11/3/16
to Kushagra Bansal, 3D Genomics

Hi Kushagra,

Here are threads about this problem:

http://stackoverflow.com/questions/23960451/java-system-preferences-under-different-users-in-linux
https://groups.google.com/forum/m/#!searchin/3d-genomics/user$20prefs/3d-genomics/G0RKXXSe-Oc

But I think your error is due to the space between -D and user.home. They must form a single token: -Duser.home=/groups/cbdm-db/kb124/juicer-master_test/AWS/script/temp

Also you must have write access to that directory.
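[Editor's note: to make the fix concrete, here is a minimal sketch (path taken from this thread) of the corrected property flag. With a space after -D, java parses "user.home=..." as the name of the main class, which is exactly the "Could not find or load main class" error reported below.]

```shell
# Minimal sketch (path from the thread): the system-property flag must be
# a single token. Correct form has no space between -D and user.home.
PREFS_DIR=/groups/cbdm-db/kb124/juicer-master_test/AWS/script/temp
JAVA_FLAG="-Duser.home=${PREFS_DIR}"   # correct: -Dkey=value, no space
echo "$JAVA_FLAG"
```

The directory must also exist and be writable by the user running the job, or the Preferences API will still fail.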

Best
Neva

On Thu, Nov 3, 2016 at 11:29 AM, Kushagra Bansal <kushag...@gmail.com> wrote:

Sorry, I am not sure what the directory for prefs should be here. I modified the script like this:

java -D user.home=/groups/cbdm-db/kb124/juicer-master_test/AWS/script/temp -Djava.io.tmpdir=/opt/juicer/tmp -Djava.awt.headless=true -Djava.library.path=`dirname $0`/lib64  -Xmx32000m -Xms8000m -jar `dirname $0`/juicebox_tools.7.0.jar $*

And got this error:

Error: Could not find or load main class user.home=.groups.cbdm-db.kb124.juicer-master_test.AWS.scripts.temp

And if I run Juicebox without any modification, I get this error:

Not including fragment map
Nov 03, 2016 6:03:54 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: Couldn't create user preferences directory. User preferences are unusable.
Nov 03, 2016 6:03:54 AM java.util.prefs.FileSystemPreferences$1 run
WARNING: java.io.IOException: No such file or directory
Failed to create user directory!
Exception in thread "main" java.lang.ExceptionInInitializerError
at juicebox.tools.utils.original.Preprocessor.preprocess(Preprocessor.java:186)
at juicebox.tools.clt.old.PreProcessing.run(PreProcessing.java:98)
at juicebox.tools.HiCTools.main(HiCTools.java:77)
Caused by: java.lang.NullPointerException
at juicebox.DirectoryManager.getHiCDirectory(DirectoryManager.java:113)
at juicebox.HiCGlobals.<clinit>(HiCGlobals.java:52)
... 3 more
Nov 03, 2016 6:03:56 AM java.util.prefs.FileSystemPreferences checkLockFile0ErrorCode
WARNING: Could not lock User prefs.  Unix error code 2.
Nov 03, 2016 6:03:56 AM java.util.prefs.FileSystemPreferences syncWorld
WARNING: Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.


Kushagra



On Thu, Nov 3, 2016 at 5:49 AM, Neva Durand <ne...@broadinstitute.org> wrote:
Yes looks normal. 

You can modify the juicebox.sh script to put in the flag -D user.home=<directory for prefs>. This is so you get rid of that error. 

Then you can run juicebox.sh pre -s inter.txt -g inter_hists.m -q 1 merged_nodups.txt inter.hic mm9 
(I think it was mm9 you aligned against?)

That will create the .hic file and you can create one at mapq 30 via -q 30 (and adding appropriate endings to each)
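[Editor's note: a sketch of the two "pre" invocations described above, assuming juicebox.sh and merged_nodups.txt sit in the working directory. The _mapq30 suffix is a hypothetical naming choice for "adding appropriate endings"; the commands are echoed as a dry run, so drop the echo to actually execute them.]

```shell
#!/bin/bash
# Dry-run sketch: build .hic files at MAPQ 1 and MAPQ 30.
# "_mapq30" is an illustrative suffix, not something Juicer generates.
for Q in 1 30; do
  SUFFIX=""
  [ "$Q" -ne 1 ] && SUFFIX="_mapq${Q}"
  echo ./juicebox.sh pre -s "inter${SUFFIX}.txt" -g "inter${SUFFIX}_hists.m" \
       -q "$Q" merged_nodups.txt "inter${SUFFIX}.hic" mm9
done
```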


On Thursday, November 3, 2016, Kushagra Bansal <kushag...@gmail.com> wrote:
Thanks Neva! Yes, I tried to modify the AWS version so that it would work on Orchestra, but I guess I was not completely successful. I am a bit of a newbie to Hi-C analysis. If you don't mind, could you please explain how I should proceed with the merged_sort, merged_nodups, and dups files? Also, this is what the inter.txt file looks like for 1.4 million reads. Does it look normal?

Experiment description: 
Sequenced Read Pairs:  1,489,951
 Normal Paired: 1,155,173 (77.53%)
 Chimeric Paired: 0 (0.00%)
 Chimeric Ambiguous: 0 (0.00%)
 Unmapped: 334,778 (22.47%)
 Ligation Motif Present: 289,079 (19.40%)
Alignable (Normal+Chimeric Paired): 1,155,173 (77.53%)
Unique Reads: 1,138,383 (76.40%)
PCR Duplicates: 16,316 (1.10%)
Optical Duplicates: 474 (0.03%)
Library Complexity Estimate: 40,473,765
Intra-fragment Reads: 179,055 (12.02% / 15.73%)
Below MAPQ Threshold: 297,243 (19.95% / 26.11%)
Hi-C Contacts: 662,085 (44.44% / 58.16%)
 Ligation Motif Present: 14,413  (0.97% / 1.27%)
 3' Bias (Long Range): 82% - 18%
 Pair Type %(L-I-O-R): 25% - 25% - 25% - 25%
Inter-chromosomal: 211,483  (14.19% / 18.58%)
Intra-chromosomal: 450,602  (30.24% / 39.58%)
Short Range (<20Kb): 108,896  (7.31% / 9.57%)
Long Range (>20Kb): 341,706  (22.93% / 30.02%)

Thank you!

Kushagra

Neva Durand

Nov 3, 2016, 8:08:52 AM
to Kushagra Bansal, 3D Genomics
Yes - with only 1.4M reads, there's no point in doing any feature annotation (Arrowhead or HiCCUPS), which are traditionally run after "pre".

Yes, you can load them directly with the Local button or put them on a web server somewhere and load them via URL.

On Thu, Nov 3, 2016 at 1:06 PM, Kushagra Bansal <kushag...@gmail.com> wrote:
Thank you, Neva! Yes, I figured out that problem. Also, I needed to change the directory for 

-Djava.io.tmpdir=/opt/juicer/tmp 

to

-Djava.io.tmpdir=/groups/cbdm-db/kb124/Java/tmp
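[Editor's note: putting both fixes together, the launch line inside juicebox.sh would look roughly like this. It is a sketch assembled from the paths mentioned in this thread; the echo makes it a dry run so the final command can be inspected.]

```shell
# Sketch: both -D properties point at directories the user can write to,
# with no space after -D. Paths are from this thread; adjust to your own.
PREFS_DIR=/groups/cbdm-db/kb124/juicer-master_test/AWS/script/temp
TMP_DIR=/groups/cbdm-db/kb124/Java/tmp
JAVA_OPTS="-Duser.home=${PREFS_DIR} -Djava.io.tmpdir=${TMP_DIR} -Djava.awt.headless=true"
echo java $JAVA_OPTS -Xmx32000m -Xms8000m -jar juicebox_tools.7.0.jar
```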

Once I get the .hic files, are those the final files? And can I use them directly in the Windows-based Juicebox program?

Kushagra

