Unfortunately my institution doesn't give me access to Nature Methods and I probably won't ILL the letters, but it is interesting to know about this exchange.
Personally, I think that in amplicon sequencing, especially when sampling from similar habitats, the assumption must be that a zero simply represents a missed observation.
That said, others I have talked to throw out a lot of low-abundance stuff because of the mixed clustering issue, unless they are using confirmatory indexes (unique indexes for both indexing reads) as opposed to simple combinatorial indexes or, as my dissertation data were generated (gulp), single indexes. When low-abundance OTUs are simply discarded, less of the community can be addressed, yet the conclusions may be more confident, since a zero count that would otherwise be a "missed observation" can be pushed toward the "not present" camp as far as confident observations are concerned (certainty that the sequence belongs to a given sample).
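For what it's worth, here is a minimal sketch of the kind of low-abundance filter I mean, assuming the OTU table lives in a pandas DataFrame with samples as rows and OTUs as columns; that layout, the function name, and the threshold of 10 total reads are illustrative assumptions on my part, not a recommendation.

```python
# Minimal sketch: drop OTUs whose total count across all samples falls below a threshold.
# Table orientation (rows = samples, columns = OTUs) and min_total=10 are assumptions.
import pandas as pd

def drop_low_abundance(otu_table: pd.DataFrame, min_total: int = 10) -> pd.DataFrame:
    """Remove OTU columns whose summed count across all samples is below min_total."""
    keep = otu_table.sum(axis=0) >= min_total
    return otu_table.loc[:, keep]

# Toy example: OTU_3 (total count 2) is discarded at min_total=10.
table = pd.DataFrame(
    {"OTU_1": [120, 80, 95], "OTU_2": [15, 0, 7], "OTU_3": [1, 1, 0]},
    index=["sample_A", "sample_B", "sample_C"],
)
filtered = drop_low_abundance(table, min_total=10)
print(filtered.columns.tolist())  # ['OTU_1', 'OTU_2']
```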
Subsampling should give you a better estimate of individual OTU proportions when doing group comparisons, as opposed to total beta diversity. The overdispersion corrected for by normalizing has a lot to do with simply detecting more OTUs at greater depth, but normalizing also skews the observed abundances of your data, so it won't yield valid proportions for abundance comparison. Amend et al. (2010?) showed that these "quasi-quantitative" (is that how they put it?) differences are important for observing community differences, even if they aren't necessarily the true proportions.
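In case it clarifies what I mean mechanically, here is a rough sketch of rarefying (subsampling without replacement) a single sample's OTU counts down to an even depth; the toy counts and the depth of 100 are made-up numbers for illustration only.

```python
# Minimal sketch of rarefying one sample's OTU counts to a fixed depth
# by drawing reads without replacement. Depth and counts are illustrative.
import numpy as np

def rarefy_counts(counts: np.ndarray, depth: int, rng=None) -> np.ndarray:
    """Subsample a vector of OTU counts down to `depth` reads without replacement."""
    rng = np.random.default_rng() if rng is None else rng
    if counts.sum() < depth:
        raise ValueError("sample has fewer reads than the requested depth")
    reads = np.repeat(np.arange(counts.size), counts)  # one entry per read, labeled by OTU index
    picked = rng.choice(reads, size=depth, replace=False)
    return np.bincount(picked, minlength=counts.size)

counts = np.array([500, 120, 30, 2, 1])   # toy sample with 653 reads
print(rarefy_counts(counts, depth=100))   # e.g. [77 18  5  0  0]; rare OTUs often drop to zero
```

Note how the rare OTUs frequently come out as zeros after subsampling, which is exactly the "missed observation" problem above.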
I also think that removing singletons or "low-count-tons" is good practice. With my single-indexed data it is absolutely insufficient on its own, since samples can "share" observations due to indexing inaccuracy on the sequencer. Trim that noise right away. Also kill unshared OTUs, since they could easily be PCR artifacts; if you think one is a real sequence, you need to get more DNA (a fresh extraction) from the suspect location and resequence a few true replicates. For those of us with single-indexed data, the general conclusions you observe are correct, but there will be increasing doubt as to which habitat an OTU was observed in as its observation counts diminish.
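If it helps, a minimal sketch of the singleton plus unshared-OTU trim, again assuming a samples-by-OTUs pandas DataFrame; the orientation and the "present in more than one sample" rule are my illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch: drop global singletons and OTUs observed in only one sample.
# Rows = samples, columns = OTUs is an assumption about the table layout.
import pandas as pd

def drop_singletons_and_unshared(otu_table: pd.DataFrame) -> pd.DataFrame:
    """Keep only OTUs with total count > 1 that appear in more than one sample."""
    totals = otu_table.sum(axis=0)                     # total reads per OTU
    n_samples_present = (otu_table > 0).sum(axis=0)    # number of samples where the OTU occurs
    keep = (totals > 1) & (n_samples_present > 1)
    return otu_table.loc[:, keep]

# Usage would mirror the earlier filter: cleaned = drop_singletons_and_unshared(table)
```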
One other thing I would add to this discussion is that very stringent quality filtering seems to remove a lot of the noise I otherwise observe (singletons/doubletons, etc., that said tutorial might otherwise retain). I typically throw out 70-80% of my data in favor of only those sequences of exceptional quality. Sure, this reduces my depth, so am I seeing an effect of count? Maybe, but my final depths are still greater than 1000 most of the time, which according to the Weiss manuscript is adequate to avoid most count-based issues.
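To make "very stringent" a bit more concrete, here is one hedged sketch of read-level filtering by expected errors (summing the per-base error probabilities implied by Phred+33 quality scores); the 0.5 expected-error cutoff and the file name are assumptions for illustration, and your own pipeline's filter may well work differently.

```python
# Minimal sketch: keep only FASTQ reads whose expected errors (sum of per-base
# error probabilities from Phred+33 scores) fall under a strict cutoff.
# The max_ee=0.5 cutoff and the file name are illustrative assumptions.
import gzip

def expected_errors(qual_line: str) -> float:
    """Sum of per-base error probabilities implied by Phred+33 quality characters."""
    return sum(10 ** (-(ord(c) - 33) / 10) for c in qual_line)

def filter_fastq(path: str, max_ee: float = 0.5):
    """Yield (header, seq, qual) records whose expected errors are <= max_ee."""
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt") as handle:
        while True:
            header = handle.readline().rstrip()
            if not header:
                break
            seq = handle.readline().rstrip()
            handle.readline()                  # '+' separator line
            qual = handle.readline().rstrip()
            if expected_errors(qual) <= max_ee:
                yield header, seq, qual

# kept = list(filter_fastq("reads.fastq.gz", max_ee=0.5))
```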
I agree that subsampling is imperfect, but I wouldn't discount it entirely either, as it allows us to get the data to fit certain assumptions without which we can't perform essential statistical tests.