nvBWT crashed


Ying Zhang

Jul 24, 2014, 11:13:31 AM
to nvbio...@googlegroups.com
Hi,

I am posting my issue again.

I am running nvBWT on a system consisting of a Dell R710 head/login node with 48 GiB of memory, eight Dell compute nodes each with dual X5675 six-core 3.06 GHz processors and 96 GiB of memory, and 32 Nvidia M2070 GPGPUs.  I am using the Tesla GPU cards.

Below is the error message:

info    : max length : -1
info    : input      : "hg19.fa"
info    : output     : "hg19"
verbose :   cuda devices : 4
verbose :   device 0 has compute capability 2.0
verbose :     SM count          : 14
verbose :     SM clock rate     : 1147 Mhz
verbose :     memory clock rate : 1.6 Ghz
verbose :   device 1 has compute capability 2.0
verbose :     SM count          : 14
verbose :     SM clock rate     : 1147 Mhz
verbose :     memory clock rate : 1.6 Ghz
verbose :   device 2 has compute capability 2.0
verbose :     SM count          : 14
verbose :     SM clock rate     : 1147 Mhz
verbose :     memory clock rate : 1.6 Ghz
verbose :   device 3 has compute capability 2.0
verbose :     SM count          : 14
verbose :     SM clock rate     : 1147 Mhz
verbose :     memory clock rate : 1.6 Ghz
verbose :   chosen device 3
verbose :     device name        : Tesla M2070
verbose :     compute capability : 2.0
info    : device mem : total: 5.2 GB, free: 5.2 GB
info    : directory  : ""
info    :
info    : counting bps... started
info    :   counting "hg19.fa"
info    : counting bps... done
info    :
info    : stats:
info    :   reads           : 93
info    :   sequence length : 3137161264 bps (748.0 MB)
info    :   buffer size     : 1568.6 MB
info    :
info    : buffering bps... started
info    :   buffering "hg19.fa"
info    : buffering bps... done
error   : caught a std::runtime_error exception:
error   :



Best,
Ying

W Langdon

Mar 13, 2015, 1:20:34 PM
to nvbio...@googlegroups.com

EunCheon Lim

May 30, 2016, 11:05:31 AM
to nvbio-users
I am getting the same runtime error as Ying.
The runtime error most likely occurs because nvBWT cannot use the whole 48 GB of memory on the node, but only the 5.2 GB available on the GPU.

I am using an 8x Tesla K80 node with 96 GB of GPU memory, but it still crashes, and the cause of the runtime error was a lack of memory.

How can I enable nvBWT to use all of the memory installed in the compute node?

I confirm that with a smaller input sequence, e.g. 100 Mb, nvBWT does not fail.
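For what it's worth, nvBWT appears to select a single device ("chosen device 3" in Ying's log), so only that one card's memory is usable, not the aggregate GPU or host memory of the node. The sketch below is not part of nvBIO; it just queries per-device memory with the CUDA runtime and compares it against rough back-of-the-envelope sizes for hg19. The constants and the 32-bit suffix-array estimate are my own assumptions, not nvBWT's actual working-set formula.

// Hypothetical standalone helper (not part of nvBIO): report per-GPU memory
// and compare it against rough estimates for an hg19 BWT build.
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    // hg19: ~3,137,161,264 bp. With 2-bit packing the text alone is
    // 3,137,161,264 / 4 bytes ~= 748 MiB (matches the "748.0 MB" in the log).
    const double bps        = 3137161264.0;
    const double packed_mib = bps / 4.0 / (1024.0 * 1024.0);

    // Rough assumption: a 32-bit suffix array over the full text would add
    // another 4 bytes/bp (~12 GiB). nvBWT builds in chunks, so its real
    // working set differs; this is only an order-of-magnitude sanity check.
    const double sa_mib = bps * 4.0 / (1024.0 * 1024.0);

    printf("packed text : %.1f MiB\n", packed_mib);
    printf("32-bit SA   : %.1f MiB (rough upper bound)\n", sa_mib);

    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; ++dev)
    {
        cudaSetDevice(dev);
        size_t free_b = 0, total_b = 0;
        cudaMemGetInfo(&free_b, &total_b);

        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);

        // Only the memory of the single chosen device is available to nvBWT,
        // regardless of how much host RAM or how many GPUs the node has.
        printf("device %d (%s): free %.1f MiB / total %.1f MiB\n",
               dev, prop.name,
               free_b  / (1024.0 * 1024.0),
               total_b / (1024.0 * 1024.0));
    }
    return 0;
}

On an M2070 this reports roughly the 5.2 GB shown in Ying's log; a K80 exposes about 12 GB per GK210 GPU, so a whole-genome build may still be tight on a single card.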