Blocked PerMANOVA question

Kenneth McCravy

Dec 16, 2024, 10:19:16 AM
to PC-ORD
Hello,

I'm comparing ground beetle species composition in a randomized complete block design with three ground cover treatments and four replications (blocks). The sample size is relatively small (n = 371); 18 species were collected. I ran a blocked PerMANOVA and got P = 0.0094 overall for ground cover, but no significant differences in the pairwise comparisons. I'm concerned about the number of unique values of the test statistic, which seems very low:

Number of unique values of the test statistic obtained in the permutations
------------------------------------------------------------------------
Factor             Number of        Unique            Percent
                   permutations     values of F       unique
------------------------------------------------------------------------
Block              4999             576               11.5223
Ground Cover       4999             216                4.3209
------------------------------------------------------------------------

I'm not really sure what this means, or whether the low number of unique values compromises or invalidates the results. Any help would be appreciated.

Thanks!
Ken McCravy

Bruce McCune

Dec 16, 2024, 11:24:53 AM
to pc-...@googlegroups.com
Ken, I'm guessing that by "sample size" you are referring to the total number of beetles encountered. With a randomized complete block design, it is more helpful to think of the number of blocks as your sample size (n=4). Given that you have 3 groups (treatments), that means you have 12 permutable items (the rows in your main matrix). So yes, the number of ways that the rows can be permuted is going to be small, whether you are testing for differences among blocks or differences among groups. So even though you permuted the group assignments about 5000 times, you are redoing the same permutations a lot and coming up with the same F ratios over and over. Having said that, it doesn't invalidate the results; you just have to recognize that you have low power to detect differences (that's back to the n=4 problem). It also means that your sample size is too small to come up with a p value that has much precision. So I would suggest taking the p values with a grain of salt. However, you can think of the F ratio as a measure of effect size (a signal-to-noise ratio contrasting between-group vs. within-group variation) that should be suggestive of how much your communities appear to differ among treatments. But having a couple more blocks would have made that much more convincing.
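
To put a number on that, here is a minimal Python sketch (not PC-ORD code; it assumes the blocked test shuffles treatment labels only within blocks, which is the usual restriction for a randomized block permutation test) counting how many distinct relabelings are even possible with 4 blocks of 3 treatments:

from math import factorial

# Assumed layout from the thread: 4 blocks, each with one plot per
# treatment, and 3 ground cover treatments -> 12 permutable rows.
n_blocks = 4
n_treatments = 3

# If treatment labels are shuffled only within blocks (assumed
# restriction for the blocked test), each block contributes 3!
# arrangements, so there are (3!)^4 distinct relabelings in total.
distinct_relabelings = factorial(n_treatments) ** n_blocks
print(distinct_relabelings)  # 1296

With only 1296 distinct relabelings available, 4999 sampled permutations necessarily repeat, and relabelings that happen to yield identical sums of squares collapse to the same F, which is consistent with the even smaller counts of unique F values in the output above.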

I should add that different investigators could see these results in different ways -- perhaps there are other readers in the group who would care to comment.

Bruce McCune

Kenneth McCravy

Dec 17, 2024, 4:11:09 PM
to PC-ORD
Many thanks Bruce, that is very helpful. Sorry for the misuse of "sample size"!

I should have included this info in the first message, but "Ground Cover" had an F = 6.2424, P = 0.0094. The "Blocks" component of the analysis was F = 0.5586, P = 0.9206, with a greater number of unique values of F (although still pretty low, 576 [11.5%] vs 216 [4.3%] for "Ground Cover"). Since the "Blocks" P-value is so high, I'm wondering if we could safely ignore "Blocks" and just run a regular completely randomized PerMANOVA, which I'm assuming would have a greater number of unique values of F and therefore more statistical power.
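
As a rough check on that intuition, the same kind of arithmetic as in the sketch above (again an assumption-level calculation, not PC-ORD output) compares the unrestricted and within-block permutation counts:

from math import factorial

# Unrestricted one-way case: 12 plots assigned to three labeled
# treatment groups of four plots each.
n_plots, per_group, n_groups = 12, 4, 3
unrestricted = factorial(n_plots) // factorial(per_group) ** n_groups

# Blocked case from before: treatment labels shuffled only within
# each of the 4 blocks of 3 plots.
within_block = factorial(3) ** 4

print(unrestricted, within_block)  # 34650 vs 1296

So a one-way test would draw from a much larger set of distinct relabelings, which is consistent with expecting more unique values of F; whether it is appropriate to drop the blocks is a separate design question.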

Thanks Again!
Ken

Bruce McCune

Dec 17, 2024, 5:07:32 PM
to pc-...@googlegroups.com
Ken, it does seem like you can safely ignore the blocks in this case. It's rather unusual that the block term is so weak!
Bruce McCune

Kenneth McCravy

Dec 20, 2024, 9:03:31 AM
to PC-ORD
Hi,

I sent the message shown below yesterday, but it doesn't seem to have made it through, so I am re-sending. Thanks.

--------------------------------------------------------------------------------

Hello Again Bruce and Others,

In reading over the one-way PerMANOVA output, I came across a couple of things that I don't really understand:

1) Under the "Statistics from randomizations" table, there is this statement:

* proportion of randomized trials with indicator value
  equal to or exceeding the observed indicator value.
  p = (1 + n)/(1 + N)
  n = number of runs >= observed
  N = number of randomized runs

I'm not sure why a statement on indicator values would be in the PerMANOVA output.

2) The "Pairwise Comparisons" table gives these results:

PAIRWISE COMPARISONS for factor Ground Cover
Notes: p values are not corrected for multiple comparisons.
       Only first 12 characters of value labels used in table.
------------------------------------------------------------------------
Level        vs. Level                 t              p                
------------------------------------------------------------------------
Herb         vs. Mowed                4.2024        0.030600
Herb         vs. Unman                2.8283        0.027600
Mowed        vs. Unman                1.6609        0.115600
------------------------------------------------------------------------

It seems odd to me that a higher t-value would produce a higher p-value (4.2024; 0.0306) than does a lower t-value (2.8283; 0.0276).

Thanks Again!
Ken

Bruce McCune

Dec 20, 2024, 11:00:15 AM
to pc-...@googlegroups.com
Ken,
1. Thanks for catching this. I'm sure that this is just a cut/paste error in re-using code -- to be fixed. But fear not, the logic is the same.
2. If the p value were derived from a parametric distribution and the design were balanced, that would be weird. But remember that with permutation-based statistics, each p value comes from an empirical distribution constructed by the permutation process -- and that distribution depends on the particulars of the data. Nevertheless, the pattern you expect, that p goes down as t increases, will be _generally_ true.
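
To make that concrete, here is a minimal, generic Python sketch of how a permutation p value of this kind is assembled (a toy univariate statistic on made-up data, not the PerMANOVA pseudo-F or PC-ORD's code), using p = (1 + n)/(1 + N) exactly as stated in the output:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: two hypothetical groups of 6 observations (not the beetle data).
a = rng.normal(0.0, 1.0, 6)
b = rng.normal(1.0, 1.0, 6)

def stat(x, y):
    # Simple absolute difference in means; PerMANOVA uses a pseudo-F
    # on a distance matrix, but the p-value logic is the same.
    return abs(x.mean() - y.mean())

observed = stat(a, b)
pooled = np.concatenate([a, b])

N = 4999          # number of randomized runs
n_ge = 0          # runs with statistic >= observed
for _ in range(N):
    perm = rng.permutation(pooled)
    if stat(perm[:6], perm[6:]) >= observed:
        n_ge += 1

p = (1 + n_ge) / (1 + N)   # p = (1 + n) / (1 + N)
print(f"observed = {observed:.3f}, p = {p:.4f}")

Because each p value is read off its own empirical distribution of permuted statistics, two comparisons with different data can end up with p values that are not strictly ordered by their t values, especially with so few permutable items.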
Bruce McCune

Kenneth McCravy

Dec 21, 2024, 10:42:35 AM
to PC-ORD
Thanks again Bruce. Yeah, I wasn't involved in the original study setup, but I've seen the site. It's a pretty standard peach orchard, very uniform. So I'm guessing that's why there wasn't much variation in the blocks. Seems like a completely randomized design would have been fine.

Ken

Kenneth McCravy

Dec 22, 2024, 12:03:53 PM
to PC-ORD
Thanks again Bruce. That makes sense regarding the permutation p values. And happy to have helped by catching that error in the PerMANOVA output.

Happy Holidays!
Ken