--
You received this message because you are subscribed to the Google Groups "ABySS" group.
==> test.log <==
0: Read 8900000 reads. 0: Hash load: 232351264 / 536870912 = 0.433 using 8.09 GB
0: Read 8700000 reads. 0: Hash load: 229847765 / 536870912 = 0.428 using 8 GB
0: Read 8900000 reads. 0: Hash load: 232351264 / 536870912 = 0.433 using 8.09 GB
0: Read 8700000 reads. 0: Hash load: 229847765 / 536870912 = 0.428 using 8 GB
0: Read 8100000 reads. 0: Hash load: 221936385 / 536870912 = 0.413 using 7.74 GB
0: Read 8900000 reads. 0: Hash load: 232351264 / 536870912 = 0.433 using 8.09 GB
0: Read 8900000 reads. 0: Hash load: 232351264 / 536870912 = 0.433 using 8.09 GB
0: Read 8900000 reads. 0: Hash load: 232351264 / 536870912 = 0.433 using 8.09 GB
0: Read 8200000 reads. 0: Hash load: 223297651 / 536870912 = 0.416 using 7.78 GB
0: Read 8700000 reads. 0: Hash load: 229847765 / 536870912 = 0.428 using 8 GB
==> test_old.log <==
0: Read 111600000 reads. 0: Hash load: 35823087 / 268435456 = 0.133 using 1.4 GB
1: Read 109300000 reads. 1: Hash load: 35238609 / 268435456 = 0.131 using 1.38 GB
0: Read 111700000 reads. 0: Hash load: 35831868 / 268435456 = 0.133 using 1.4 GB
1: Read 109400000 reads. 1: Hash load: 35247089 / 268435456 = 0.131 using 1.38 GB
0: Read 111800000 reads. 0: Hash load: 35840588 / 268435456 = 0.134 using 1.4 GB
1: Read 109500000 reads. 1: Hash load: 35254985 / 268435456 = 0.131 using 1.38 GB
0: Read 111900000 reads. 0: Hash load: 35849269 / 268435456 = 0.134 using 1.4 GB
1: Read 109600000 reads. 1: Hash load: 35263042 / 268435456 = 0.131 using 1.38 GB
0: Read 112000000 reads. 0: Hash load: 35857394 / 268435456 = 0.134 using 1.4 GB
1: Read 109700000 reads. 1: Hash load: 35271415 / 268435456 = 0.131 using 1.38 GB
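For reference, the two excerpts above can be compared directly: dividing the reported memory by the number of hash-table buckets gives a rough per-bucket footprint for each version. A small sketch of that calculation (my own parsing, not ABySS code; it assumes "GB" in the log means decimal gigabytes, 1e9 bytes):

```python
import re

# Match the "Hash load: entries / buckets = load using N GB" field layout
# shown in the logs above.
LINE_RE = re.compile(r"Hash load: (\d+) / (\d+) = [\d.]+ using ([\d.]+) GB")

def bytes_per_bucket(line):
    """Estimate bytes per hash-table bucket for one log line, or None."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    buckets = int(m.group(2))
    gb = float(m.group(3))          # assumption: decimal GB (1e9 bytes)
    return gb * 1e9 / buckets

new_run = "0: Read 8700000 reads. 0: Hash load: 229847765 / 536870912 = 0.428 using 8 GB"
old_run = "0: Read 111600000 reads. 0: Hash load: 35823087 / 268435456 = 0.133 using 1.4 GB"
print(round(bytes_per_bucket(new_run), 1))  # → 14.9
print(round(bytes_per_bucket(old_run), 1))  # → 5.2
```

By this rough measure the newer run is spending about three times as many bytes per bucket as the older one, which matches the overall complaint in the thread.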
Dear all,

It looks like a memory problem, since every process ends up asking for 8 GB of RAM or more, while previous versions did not. For example:

0: Read 8700000 reads. 0: Hash load: 229847765 / 536870912 = 0.428 using 8 GB

Any hint on this?

Thanks
On Friday, October 3, 2014 12:35:39 PM UTC+2, Luca Cozzuto wrote:

Dear all,

I noticed that ABySS is consuming much more RAM than in the past. I'm using the same command line and the same dataset: with ABySS 1.3.7 I managed to assemble with a peak of 40 GB, while with 1.5.2 I was unable to assemble even with 120 GB available.

Any hint on this?

Thank you,
Luca
./configure --prefix=$HOME/abyss --with-mpi=/usr/include/openmpi-x86_64/ CPPFLAGS=-I/software/bi/el6.3/sparsehash-2.0.2/include --enable-maxk=96
(I also tried maxk=63, but had the same problem.)

Then the command line:
abyss-pe v=-v name=test k=23 n=10 np=16 l=30 ALIGNER_OPTIONS='-k24' lib='pe1' pe1='fastq1.fq fastq2.fq' > test.log
I tried several k values to reduce the memory usage...
Thanks again,
Luca
I am having what I think is a similar issue with both versions 1.3.7 and 1.5.2.
To compare contig assemblies at different k-mer sizes, I generally run the unitig step (using mpirun -np and ABYSS-P) on many cores across a few Westmere-EP computers with 96 GB of RAM each; when they finish, I run abyss-fac to obtain N50 statistics and the like. For the paired-end assembly, I run abyss-pe with the best-performing k-mer value. I have access to a large shared-memory computer with Intel(R) Xeon(R) CPU X7560 @ 2.27GHz and 1 TB of RAM. However, the hash-table loading step seems to take a very long time, which I know it should, but it also seems to use very little of the available RAM, and I am wondering why that might be.
This is the command I run for abyss-pe:
/home/mtollis/genome_assembly/abyss-1.3.7/bin/abyss-pe np=32 k=63 E=0 s=200 n=3 v=-v l=27 name=Aapl20A_k63 C=/scratch/mtollis/Aapl_v3.0/genome/ABYSS/unitig/63 lib='pe180_L2 pe180_L8 pe1kb pe653 pe500' pe180_L2="$PE_180_L2_R1 $PE_180_L2_R2" pe180_L8="$PE_180_L8_R1 $PE_180_L8_R2" pe1kb="$pe1kb_R1 $pe1kb_R2" pe653="$pe653_R1 $pe653_R2" pe500="$pe500bp_R1 $pe500bp_R2" se="$se180_L2_flash $se180_L2_notcomb $se180_L2_S1 $se180_L2_S2 $se180_L8_flash $se180_L8_notcomb $se180_L8_S1 $se180_L8_S1 $se180_L8_S2 $se1kb_S1 $se1kb_S2 $se1kb_S3 $se653kb_S1 $se653kb_S2 $se653kb_S3 $se500bp_S1 $se500bp_S2 $se500bp_S3"
Here is some of the output after it successfully builds the .dist files for two libraries:
Reading from standard input...
Reading `Aapl20A_k63-3.fa'...
Using 1.82 GB of memory and 86.2 B/sequence.
Reading `Aapl20A_k63-3.fa'...
Building the suffix array...
Building the Burrows-Wheeler transform...
Building the character occurrence table...
Read 3.97 GB in 21151601 contigs.
Using 36.7 GB of memory and 9.24 B/bp.
Read 7 alignments. Hash load: 3 / 5 = 0.6 using 545 kB.
Read 10 alignments. Hash load: 6 / 11 = 0.545455 using 545 kB.
Read 484344 alignments. Hash load: 12 / 23 = 0.521739 using 545 kB.
Read 1000000 alignments. Hash load: 2 / 23 = 0.0869565 using 545 kB.
Read 2000000 alignments. Hash load: 2 / 23 = 0.0869565 using 545 kB.
Read 3000000 alignments. Hash load: 2 / 23 = 0.0869565 using 545 kB.
Read 4000000 alignments. Hash load: 4 / 23 = 0.173913 using 545 kB.
Read 5000000 alignments. Hash load: 2 / 23 = 0.0869565 using 545 kB.
Read 6000000 alignments. Hash load: 4 / 23 = 0.173913 using 545 kB.
Read 7000000 alignments. Hash load: 4 / 23 = 0.173913 using 545 kB.
Read 8000000 alignments. Hash load: 0 / 23 = 0 using 545 kB.
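As a sanity check on the FM-index report above, the per-base figure is consistent with the reported total. A quick calculation of my own (not ABySS output), assuming 1 GB of sequence corresponds to 1e9 bp:

```python
# Figures taken from the log above:
total_bp = 3.97e9      # "Read 3.97 GB in 21151601 contigs."
bytes_per_bp = 9.24    # "Using 36.7 GB of memory and 9.24 B/bp."

# Multiplying them back out should reproduce the reported total.
estimated_gb = total_bp * bytes_per_bp / 1e9
print(round(estimated_gb, 1))  # → 36.7, consistent with the log
```

So the 36.7 GB is fully explained by the ~9 bytes/bp index overhead; the puzzle in this thread is why so little of the remaining RAM is touched during hash-table loading.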