perManova: fixed vs random effects and variance partitioning


Nina Nikolic

Apr 14, 2016, 7:43:23 AM
to PC-ORD

Dear colleagues,

If I have two independent factors and wish to test whether they have a significant effect on multiple response variables using perMANOVA, should I definitely ignore the estimated components of variance?


Briefly, I wish to use perMANOVA on non-community data to look for differences in the mineral composition of Cabernet leaves (20 parameters) as affected by rootstock (2 types) and sampling time (five sampling dates for all samples, called "phase"). The concentrations of all mineral elements (relativized by standard deviates, i.e. z-scores; Euclidean distance used) thus characterize my samples much like "normal" species abundance data. PerMANOVA is tempting because, unlike classic two-way ANOVA, it makes no assumptions about normality or homogeneity of variance.
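For what it's worth, the preprocessing step described above (relativization by standard deviates, then Euclidean distances) can be sketched in a few lines of Python. This is only an illustration with random placeholder data standing in for the real leaf chemistry; PC-ORD performs the equivalent internally:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))  # placeholder: 50 samples x 20 mineral variables

# Relativize each variable to zero mean and unit standard deviation (z-scores)
Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Euclidean distance matrix among samples, the input to perMANOVA
D = squareform(pdist(Z, metric="euclidean"))
print(D.shape)  # (50, 50)
```

Standardizing first is what lets variables measured on very different scales (e.g. N% vs. trace elements) contribute comparably to the distances.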


My sampling design looks like this:

--------------------------------------------------------------------
rootstock   phase     SU       N%      P%      K%    etc.
--------------------------------------------------------------------
1           1         SU1
1           2         SU2
1           3         SU3
1           4         SU4
1           5         SU5
2           1         SU6
2           2         SU7
2           3         etc.
2           4
2           5
--------------------------------------------------------------------

 

1. Is this a proper two-way design — not nested, but with two independent, fixed factors? That is what I intended, anyway.

2. I have carefully read the Help and the Book on the topic of fixed and random effects, and the difference is still somewhat blurred for me; it seems to involve a good deal of subjectivity. I mean, almost any factor could theoretically have a random effect, but the way I look at it (my design) determines whether it should be treated as fixed or random. Is this essentially correct?

3. The Help says: "Variance component: When the variance of an observed variable can be decomposed into additive parts, these are known as variance components." Does this mean that whenever the interaction term in a two-way factorial perMANOVA happens to be statistically significant, the variance partitioning provided by the software should be ignored (because a significant interaction implies the factor effects are NOT additive)?

4. Finally, I have my two fixed factors and should thus ignore the variance components provided. OK, but is it then correct to estimate the proportion of variance attributable to my factors from the SS? For instance, is it OK to say that the factor "phase" accounts for about 61% of the variability (phase SS / total SS), and the factor "rootstock" for about 7%? Please see the table.

 

Randomization test of significance of pseudo-F values

--------------------------------------------------------------------
Source        d.f.      SS          MS          F           p *
--------------------------------------------------------------------
phase           4     390.74      97.685      23.674      0.000200
rootstock       1      45.522     45.522      11.032      0.000200
Interaction     4      35.683      8.9208      2.1619     0.014400
Residual       40     165.05       4.1263
Total          49     637.00
--------------------------------------------------------------------
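As a sanity check on question 4, the proportions and the pseudo-F values can be recomputed directly from the table entries. A minimal Python sketch (all numbers copied from the table above):

```python
# Sums of squares copied from the perMANOVA table above
ss = {"phase": 390.74, "rootstock": 45.522, "interaction": 35.683, "residual": 165.05}
ss_total = 637.00

# Proportion of total variation attributable to each source (SS_source / SS_total)
for source, value in ss.items():
    print(f"{source}: {100 * value / ss_total:.1f}% of total SS")
# phase -> 61.3%, rootstock -> 7.1%

# Pseudo-F = MS(factor) / MS(residual), which should match the table's F column
ms_residual = ss["residual"] / 40          # residual d.f. = 40
f_phase = (ss["phase"] / 4) / ms_residual  # phase d.f. = 4
print(round(f_phase, 3))                   # -> 23.674
```

The proportions reproduce the "about 61%" and "about 7%" figures quoted in question 4, and the recomputed pseudo-F matches the table.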

 

Thanks a lot!!!!

Bruce McCune

Apr 15, 2016, 10:53:17 AM
to pc-...@googlegroups.com
Nina,

Both phase and rootstock seem like fixed effects to me, with one qualification: if you are resampling the same vine at each phase, then there is a third factor, "vine", which is a random effect. Successive samples from the same vine are not independent, and some vines might run high or low throughout the experiment. But if you are randomly sampling vines for each phase, your design looks exactly right to me.

Traditionally statisticians have not calculated variance components for fixed effects. I think the idea is that for random effects, you have randomly sampled a population, making it more interesting to know how much of the population variance is tied up with that factor. In contrast, if the experimenter is fixing a few levels of a factor, it's not representing a population, and thus the amount of variance associated with that factor is artificially set by the experimenter.

For your number 4, yes, you can calculate those proportions from the sums of squares. But traditionally the F ratios are viewed as more meaningful, since they pit one source of variation against another after taking the degrees of freedom into account. You can think of the F ratio as a signal-to-noise ratio.
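To make the signal-to-noise idea concrete, here is a minimal one-way randomization test in Python. It is univariate and uses made-up data, so it illustrates only the principle (F as between-group MS over within-group MS, with significance judged by shuffling labels), not PC-ORD's actual multivariate computation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up one-way setup: 5 phases, 10 samples each, one response variable
groups = np.repeat(np.arange(5), 10)
y = rng.normal(size=50) + groups * 0.8  # built-in "signal" between phases

def pseudo_f(y, groups):
    """Between-group MS over within-group MS: the signal-to-noise ratio."""
    grand = y.mean()
    labels = np.unique(groups)
    ss_between = sum(len(y[groups == g]) * (y[groups == g].mean() - grand) ** 2
                     for g in labels)
    ss_within = sum(((y[groups == g] - y[groups == g].mean()) ** 2).sum()
                    for g in labels)
    df_b, df_w = len(labels) - 1, len(y) - len(labels)
    return (ss_between / df_b) / (ss_within / df_w)

f_obs = pseudo_f(y, groups)

# Randomization test: shuffle the labels, recompute F, and ask how often
# chance alone produces an F at least as large as the observed one
n_perm = 999
f_perm = [pseudo_f(y, rng.permutation(groups)) for _ in range(n_perm)]
p = (1 + sum(f >= f_obs for f in f_perm)) / (n_perm + 1)
print(f_obs, p)
```

The observed F sits far in the tail of the shuffled distribution, which is exactly what a small randomization p-value reports.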

Bruce McCune




Nina Nikolic

Apr 19, 2016, 4:08:54 AM
to PC-ORD
Dear Prof. McCune, thanks a lot!

Yes, I read previously on this forum that the F statistic in perMANOVA can be interpreted as a signal-to-noise ratio. But I don't quite grasp the concept. In my case, for instance, if the F ratio for the factor "phase" is 23.7 (table in my previous post), does that mean the systematic variance in my samples that is due solely to the effect of sampling phase is about 24 times higher than the random variance? Or...?

It is very important for me to grasp the concept correctly, and I do appreciate your help!

have a nice day,
nina





Bruce McCune

Apr 19, 2016, 10:34:15 AM
to pc-...@googlegroups.com
Sounds good to me. Or, put another way, the variation in the signal from the factor of interest is about 23.7 times the variation in the noise.

By the way, my interpretation of F as signal-to-noise is my own way of thinking about that, borrowing a concept from engineering that can be applied to many different kinds of data in many different ways. The nice thing about it for ANOVA is that it gives us a universally scaled measure of the effect size that transcends the specifics of differences in means.

Bruce

Nina Nikolic

Apr 24, 2016, 2:13:17 PM
to PC-ORD
Thanks for a clear, concise and very helpful answer, as always!
Seeing the F ratio as signal-to-noise is a new insight for me — very intuitive and informative. It's such fun to see PC-ORD outperform big software packages (I tried Statistica 6) on non-community data! Well, at least in being intuitive, logical, and user-friendly. I consider myself truly privileged. Thanks!