I'm following QIIME's http://qiime.org/tutorials/chaining_otu_pickers.html
tutorial for a meta-analysis, and I'm not sure I'm using the right strategy.
I can either (a) OTU-pick each sample individually, ending up with a lot of otu_maps to merge, or (b) merge the .fna files by study, closed-reference OTU pick each study, and end up with only two files to merge.
Currently I'm trying the first strategy, but I'm getting this error:

Some keys do not map ('1397') -- is the order of your OTU maps
equivalent to the order in which the OTU pickers were run? If
expanding a failures file, did you remember to leave out the otu
map from the run which generated the failures file?
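In case it helps frame the question, here is a minimal Python sketch of what merging chained OTU maps does conceptually (my own illustration, not QIIME's implementation; the function name merely echoes QIIME's merge_otu_maps.py script). Each member ID in a later map must be an OTU ID produced by the map just before it, which is why the maps must be passed in the order the pickers were run, and why an out-of-order map (or a leftover failures-run map) triggers "Some keys do not map":

```python
# Conceptual illustration of chaining OTU maps (not QIIME code).
# An OTU map is a dict: OTU id -> list of member ids. In a chained run,
# the member ids of a later map are the OTU ids of the previous map.

def merge_otu_maps(maps):
    """Merge OTU maps given in the order the OTU pickers were run."""
    merged = dict(maps[0])
    for later in maps[1:]:
        step = {}
        for otu, members in later.items():
            seqs = []
            for member in members:
                if member not in merged:
                    # This is the situation behind "Some keys do not map":
                    # maps out of order, or a failures-run map slipped in.
                    raise KeyError("Some keys do not map (%r)" % member)
                seqs.extend(merged[member])
            step[otu] = seqs
        merged = step
    return merged

# Hypothetical first pass (prefix_suffix) collapses reads into prefix OTUs:
prefix_map = {"p0": ["seq1", "seq2"], "p1": ["seq3"]}
# Hypothetical second pass (uclust_ref) clusters those OTUs against SILVA:
uclust_map = {"ref_otu_A": ["p0", "p1"]}

print(merge_otu_maps([prefix_map, uclust_map]))
# {'ref_otu_A': ['seq1', 'seq2', 'seq3']}
```

Passing the same two maps in the reverse order raises the KeyError immediately, which looks a lot like the error above.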
I'm using SRA-downloaded data. Since it's presumably already demultiplexed, I'm only running convert_fastaqual_fastq.py, then running split_libraries with individual mapping files per sample name.
Using SILVA 123 as reference.
Using bash and python to run QIIME across several files (as in split_libraries).
First OTU-picking step = prefix_suffix; second = uclust_ref against SILVA 123.
Does anyone have tips on this, in the context of a meta-analysis (dealing with several samples from multiple studies)?