--
You received this message because you are subscribed to the Google Groups "WagerlabTools" group.
To unsubscribe from this group and stop receiving emails from it, send an email to wagerlabtool...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
On Feb 22, 2017, at 3:02 AM, tieman...@gmail.com wrote:
<Taylor_BehavRes_12_mediation_perm.pdf><Winkler_NI_14.pdf><Winkler_NI_15_MultilevelBlockPermutation.pdf>
It sounds like permuting the baseline values relative to the post-stimulus values might violate exchangeability by not preserving the natural correlations in your dataset. But I may be misunderstanding what you’re doing.
A simple way to correct for multiple comparisons is to use the bootstrapped p-values to calculate an FDR threshold.
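To make that concrete, here is a minimal Benjamini–Hochberg sketch in Python (the Wagerlab tools themselves are MATLAB; the function name and example p-values here are illustrative): it finds the largest bootstrap p-value that passes the step-up criterion and uses it as the FDR threshold.

```python
import numpy as np

def fdr_bh_threshold(p, q=0.05):
    """Benjamini-Hochberg FDR threshold: the largest sorted p-value
    p_(k) satisfying p_(k) <= (k/m) * q; returns 0.0 if none pass."""
    p = np.sort(np.asarray(p))
    m = len(p)
    below = p <= (np.arange(1, m + 1) / m) * q
    return p[below].max() if below.any() else 0.0

# Hypothetical bootstrap p-values for 5 tests
pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.27])
thresh = fdr_bh_threshold(pvals, q=0.05)
significant = pvals <= thresh  # first two tests survive here
```

Any test whose bootstrapped p-value falls at or below `thresh` is then declared significant at FDR q = 0.05.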
Tor
To clarify: the 2nd-level bootstrap resamples subjects with replacement, not time points. The weights are based on within-person precision, which does not change, so they are not re-calculated; nor do we rescale the relative weights so that they sum to 1 within each bootstrap sample. This practice might increase the variance of the weighted sum (the group-level coefficients), reducing power. We could plausibly do it the other way, but we'd have to evaluate that carefully with true/false-positive simulations, etc. The variation in the weighted sum is what is used to construct the bootstrap distribution, so the more variation, the less power.
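A small Python sketch of the scheme described above, i.e. resampling subjects (not time points) with replacement while carrying fixed precision weights along unchanged. The function name, the weighted-sum statistic, and the simulated inputs are all illustrative, not the toolbox's actual implementation:

```python
import numpy as np

def bootstrap_group_effect(betas, weights, n_boot=5000, seed=0):
    """Second-level bootstrap: draw SUBJECTS with replacement.
    `weights` are fixed within-person precision weights; each
    resampled subject keeps its own weight, and the weights are
    NOT re-computed or re-normalized to sum to 1 per sample."""
    rng = np.random.default_rng(seed)
    n = len(betas)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # subject indices, with replacement
        stats[b] = np.sum(weights[idx] * betas[idx])  # weighted-sum group statistic
    return stats

# Hypothetical first-level effects and precision weights for 20 subjects
rng = np.random.default_rng(1)
betas = rng.normal(0.5, 1.0, size=20)
weights = rng.uniform(0.8, 1.2, size=20)
boot = bootstrap_group_effect(betas, weights)
# Crude two-tailed bootstrap p-value against zero
p = 2 * min((boot <= 0).mean(), (boot >= 0).mean())
```

Because the weights are not rescaled within each bootstrap sample, the total weight varies from sample to sample, which inflates the spread of `boot` and (per the point above) costs some power relative to a re-normalized variant.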