Read length error handling

Richard Casey

May 1, 2015, 12:07:40 PM
to nvbio...@googlegroups.com
In some nvBowtie runs, depending on the dataset, we get this error:


error   : [0] unsupported read length 786 (maximum is 512)
visible : [0] nvBowtie cuda driver... done
stats   : [0]   total        : 0.37 sec (avg: 0.0K reads/s).
stats   : [0]   mapping      : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s, 0.00 device sec).
stats   : [0]   selecting    : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s, 0.00 device sec).
stats   : [0]   sorting      : 0.00 sec (avg: -nanM seeds/s, max: 0.000M seeds/s, 0.00 device sec).
stats   : [0]   scoring      : 0.00 sec (avg: -nanM seeds/s, max: 0.000M seeds/s, 0.00 device sec).
stats   : [0]   locating     : 0.00 sec (avg: -nanM seeds/s, max: 0.000M seeds/s, 0.00 device sec).
stats   : [0]   backtracking : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s, 0.00 device sec).
stats   : [0]   finalizing   : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s, 0.00 device sec).
stats   : [0]   results DtoH : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s).
stats   : [0]   results I/O  : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s).
stats   : [0]   reads HtoD   : 0.00 sec (avg: -nanM reads/s, max: 0.000M reads/s).
stats   : [0]   reads I/O    : 0.37 sec (avg: 0.449M reads/s, max: 0.449M reads/s).


After this error occurs, the job appears to continue running; however, it's not clear how the error is actually handled. Does nvBowtie simply skip the offending record (or records) and continue? Judging from the statistics printed after the error, nothing is really happening. Better error handling or reporting would clarify this condition.
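For anyone hitting the same thing, a quick way to confirm which records trigger this is to scan the input for reads over the cap before running nvBowtie. A minimal sketch, assuming uncompressed FASTQ input and the 512 bp limit taken from the error message above:

import sys

MAX_LEN = 512  # read-length cap reported by nvBowtie in the error above

def oversized_reads(path):
    # FASTQ records are groups of four lines: @id, sequence, +, quality.
    # zip() over the same file iterator four times walks the lines in fours.
    with open(path) as fh:
        for idx, (header, seq, _plus, _qual) in enumerate(zip(fh, fh, fh, fh)):
            seq = seq.rstrip()
            if len(seq) > MAX_LEN:
                yield idx, header.rstrip().lstrip("@"), len(seq)

if __name__ == "__main__":
    for idx, read_id, length in oversized_reads(sys.argv[1]):
        print(f"record {idx}: {read_id} is {length} bp")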

Jacopo Pantaleoni

May 5, 2015, 11:41:06 AM
to Richard Casey, nvbio...@googlegroups.com
Hi Richard,

it is true that there is currently a limitation on read length, and though I've never tested it, it's possible that instead of stopping properly, some kernels keep being launched.

I will verify this - thanks for the report.

-jacopo

Richard Casey

May 6, 2015, 12:44:16 PM
to nvbio...@googlegroups.com
No problem.  I have a simple Python script that preprocesses the input file and removes records > 512 bp.  There was in fact a single offending record; the script removed it, and now the job runs fine.  But yes, it would be useful if the error handler stopped the job outright, or at least reported clearly what it did with the offending read.
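For reference, here is a minimal sketch of what such a preprocessing script might look like (a reconstruction, not the actual script from the post; it assumes uncompressed FASTQ input and silently drops a trailing partial record):

import sys

MAX_LEN = 512  # nvBowtie's read-length cap, per the error above

def filter_fastq(src, dst, max_len=MAX_LEN):
    """Copy FASTQ records from src to dst, dropping reads longer than max_len."""
    kept = dropped = 0
    with open(src) as fin, open(dst, "w") as fout:
        # Each FASTQ record is four lines: @id, sequence, +, quality.
        for record in zip(fin, fin, fin, fin):
            if len(record[1].rstrip()) > max_len:
                dropped += 1
            else:
                fout.writelines(record)
                kept += 1
    return kept, dropped

if __name__ == "__main__":
    kept, dropped = filter_fastq(sys.argv[1], sys.argv[2])
    print(f"kept {kept} reads, dropped {dropped} longer than {MAX_LEN} bp")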