Correcting for gene length allows you to compare genes within the same sample. So, for example, if you want to estimate the coverage of a pathway in a sample by the mean coverage of its component genes, then you wouldn't want a single large gene in a pathway (that recruits a lot of reads) to bias your estimate of the pathway's abundance. Similarly, if you want to profile a strain of a species in a metagenome, it's helpful to check for uniform coverage of its component genes (which requires adjusting for their individual lengths).
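To make that concrete, here's a toy sketch (gene names, lengths, and read counts all made up) of why a pathway's abundance is better estimated from length-normalized gene coverages than from raw read counts:

```python
# Toy example: estimating pathway abundance from its component genes.
# All genes here have the same true coverage (10x), but the large gene
# recruits proportionally more reads.

# (gene, length_bp, reads_recruited)
pathway_genes = [
    ("geneA", 1_000, 100),
    ("geneB", 1_000, 100),
    ("geneC", 10_000, 1_000),  # 10x longer, so it recruits ~10x the reads
]

read_len = 100  # bp

# Naive mean of raw read counts: dominated by the large gene
naive_mean = sum(r for _, _, r in pathway_genes) / len(pathway_genes)

# Mean of per-gene coverage (reads * read_len / gene_len): each gene
# contributes its sequencing depth, regardless of its length
coverages = [r * read_len / length for _, length, r in pathway_genes]
mean_coverage = sum(coverages) / len(coverages)

print(naive_mean)     # 400.0 "reads per gene" -- inflated by geneC
print(mean_coverage)  # 10.0 -- all three genes are in fact at 10x
```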
If you just want to know whether a gene/transcript is differentially abundant between two types of samples (which is what those workflows are focused on), you have to correct for uneven sample read depth, but not necessarily gene length. For example, if all of my samples have 1M reads, and gene X recruits a mean of 2000 reads in cases and 1000 reads in controls, then its abundance is doubled in cases. You could normalize both means to the length of gene X, but it wouldn't change that conclusion. However, if another gene Y had a mean abundance of 2000 reads in controls, it would not be appropriate to say it was "twice as abundant as X" without normalizing for their respective lengths (Y might simply be twice as long as X).
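Spelling out that arithmetic (the gene Y length here is hypothetical, chosen so the two genes come out equally abundant):

```python
# Within-gene fold changes survive length normalization; cross-gene
# comparisons require it. Numbers follow the example in the text;
# the lengths are made up for illustration.

len_x, len_y = 1_000, 2_000   # suppose gene Y is twice as long as gene X
x_cases, x_controls = 2_000, 1_000
y_controls = 2_000

# Fold change of X between conditions: the length term cancels
fc_raw = x_cases / x_controls                        # 2.0
fc_norm = (x_cases / len_x) / (x_controls / len_x)   # still 2.0

# Comparing Y to X within controls: raw counts mislead
ratio_raw = y_controls / x_controls                        # 2.0 -- "twice as abundant"?
ratio_norm = (y_controls / len_y) / (x_controls / len_x)   # 1.0 -- actually equal

print(fc_raw, fc_norm, ratio_raw, ratio_norm)
```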
The alignment quality issue is more specialized to metagenomics. In closed-reference RNA-seq you typically say that a read maps to one of N genes or it doesn't. When you're dealing with a pool of potentially novel organisms, the decision of whether a read mapped with sufficient homology (and if so, where) is more nuanced. That said, dividing read weights over candidate database sequences can also arise in RNA-seq. See, for example, EM approaches to dealing with overlapping isoforms.
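As a rough illustration of that EM idea, here's a minimal toy version that splits ambiguous reads among candidate reference sequences in proportion to estimated abundances (real quantifiers also model sequence length, alignment scores, and errors; the reads and references below are invented):

```python
# Toy EM for dividing read weights over candidate reference sequences.
# Each read lists the references it aligns to equally well.
alignments = [
    {"refA"},
    {"refA"},
    {"refA", "refB"},  # ambiguous read: hits both references
    {"refB"},
]
refs = {"refA", "refB"}

# Start from uniform relative abundances
theta = {r: 1 / len(refs) for r in refs}

for _ in range(50):
    # E-step: fractionally assign each read to its candidates
    # in proportion to the current abundance estimates
    counts = {r: 0.0 for r in refs}
    for hits in alignments:
        z = sum(theta[r] for r in hits)
        for r in hits:
            counts[r] += theta[r] / z
    # M-step: re-estimate abundances from the expected counts
    total = sum(counts.values())
    theta = {r: c / total for r, c in counts.items()}

print(theta)  # converges to refA: 2/3, refB: 1/3 -- the ambiguous
              # read is pulled mostly toward the better-supported refA
```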
Hope this helps!