Hi all,
The -otutab command (~/Usearch/usearch10.0_32bit) aborts a few minutes after starting, because the file it creates while mapping the reads (293F751R_CTMS07AP.fasta) against the -zotus grows very quickly and becomes ridiculously large (>7 GB!).
This is the error message I get in VirtualBox:
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
Aborted (core dumped)
I also tried -usearch_global (~/Usearch/usearch9.2_64bit), but I'm having the same issue (the process gets killed).
I have no idea why this is happening, because I compared the size of all the input files, the number of reads, ZOTUs, etc., and everything looks "normal" (i.e., similar to, or even smaller than, previous analyses that ran successfully). I have attached an Excel sheet with some stats collected during two different bioinformatics analyses (MS06AP was successful and MS07AP is the one causing problems).
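In case it helps, the read and ZOTU counts in the sheet can be double-checked with standard shell one-liners along these lines (just a rough sketch using the file names from the commands further down; the exact commands may of course differ):

# count reads in the query FASTA (one header line per read)
grep -c "^>" 293F751R_CTMS07AP.fasta
# count ZOTUs in the reference FASTA
grep -c "^>" unoise3/293F751R_ZOTU_no_chimera_labelled.fasta
# total number of bases in the query FASTA (sums the non-header lines)
awk '!/^>/ {bases += length($0)} END {print bases " bases"}' 293F751R_CTMS07AP.fasta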
I can add that this MiSeq run includes two amplicons, generated by two primer sets targeting two 16S regions (primers 293F751R and primers 1328F1664R).
I tried running the whole bioinformatics pipeline either 1) keeping the two amplicons together or 2) splitting them by primer set, but in both cases I get the same issue described above.
These are the commands that I tried and that don't work:
~/Usearch/usearch10.0 -otutab 293F751R_CTMS07AP.fasta -zotus unoise3/293F751R_ZOTU_no_chimera_labelled.fasta -strand both -sizeout -notmatched unoise3/otutab/293F751R_unmapped_reads.fasta -dbmatched unoise3/otutab/293F751R_matched_ZOTUs_with_sizes.fasta -biomout unoise3/otutab/293F751R_CTMS07AP_ZOTU_table.json -mapout unoise3/otutab/293F751R_CTMS07AP_ZOTU_map.txt;
~/Usearch/usearch9.2_64bit -usearch_global 293F751R_CTMS07AP.fasta -db unoise3/293F751R_ZOTU_no_chimera_labelled.fasta -strand both -sizeout -id 0.97 -otutabout 293F751R_CTMS07AP_ZOTU_table.txt
Thanks
Andrea