Colin - I should add this bit of information: I originally had 171 samples, which I filtered so that I only worked with those having at least 15,000 sequences; even so, the variance in library size is large. Anyhow, I ran the normalize_table.py script on my raw OTU table. When I imported the result into phyloseq it gave the error shown below (my import calls were roughly along the lines of the sketch that follows):
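For reference, the three objects were built more or less like this; the file names here are placeholders rather than my exact paths:

library("phyloseq")

# Import the normalized BIOM table, the rep-set tree, and the QIIME mapping file
# (file names are made up for illustration)
biomfile = import_biom("otu_table_normalized.biom", parseFunction = parse_taxonomy_greengenes)
tree = read_tree("rep_set.tre")
map = import_qiime_sample_data("mapping_file.txt")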
> ## Merge the three objects together in phyloseq
> testdata=merge_phyloseq(biomfile,tree,map)
> print(testdata)
phyloseq-class experiment-level object
otu_table() OTU Table: [ 90862 taxa and 148 samples ]
sample_data() Sample Data: [ 148 samples by 8 sample variables ]
tax_table() Taxonomy Table: [ 90862 taxa by 7 taxonomic ranks ]
phy_tree() Phylogenetic Tree: [ 90862 tips and 90848 internal nodes ]
> # Alpha_diversity #Best Plots
> plot_richness(testdata, x="Description", color="Site_Tissue", measures=c("Chao1", "Shannon"))
Error in estimateR.default(newX[, i], ...) :
function accepts only integers (counts)
In addition: Warning message:
In estimate_richness(physeq, split = TRUE, measures = measures) :
The data you have provided does not have
any singletons. This is highly suspicious. Results of richness
estimates (for example) are probably unreliable, or wrong, if you have already
trimmed low-abundance taxa from the data.
We recommend you find the un-trimmed data and retry.
So the script worked, but it seems to have removed my singletons. I should also mention that in an old workflow I ran the filter_otus_from_otu_table.py script with -n 2 for singleton removal and noticed that my files did not decrease in size. Is it possible I never had many singletons to begin with? (A quick way I might check this is sketched below.)
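In case it is useful, something like the following could answer that directly in phyloseq, run on the un-normalized table; testdata_raw is just a placeholder name for the object built from the raw integer-count OTU table:

# Number of OTUs whose total count across all samples is exactly 1 (true singletons)
sum(taxa_sums(testdata_raw) == 1)

# Chao1 (via vegan's estimateR) needs integer counts, so richness estimates
# should be run on the raw table rather than the normalized one
estimate_richness(testdata_raw, measures = c("Chao1", "Shannon"))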