Re: Voxel-wise vs. Cluster-wise

Thomas Nichols

Jun 5, 2019, 11:08:53 AM
to Chuanji Gao, Statistical Nonparametric Mapping
Dear Chuanji,

I have run whole-brain searchlight decoding analyses and got an accuracy map for each individual.
I ran permutation testing for the 2nd (group) level analysis on all the individual accuracy maps.

OK. Just for the record, accuracy data should be perfectly valid for testing with a two-sample t-test or a correlation with one variable. But if there are nuisance variables, there can be heterogeneous variance that makes the test only approximately exact.
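To make the permutation logic concrete, here is a minimal sketch in plain NumPy (not SnPM code, and the data array is made up) for the simplest case, a one-sample test of accuracy-minus-chance maps across subjects, using sign flipping and a max-T distribution for voxelwise FWE control:

    import numpy as np

    rng = np.random.default_rng(0)
    n_subj, n_vox = 20, 5000                          # hypothetical dimensions
    acc = rng.normal(0.02, 0.10, (n_subj, n_vox))     # accuracy - chance, subject x voxel

    def t_stat(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

    t_obs = t_stat(acc)
    max_t = np.empty(1000)
    for i in range(1000):
        signs = rng.choice([-1, 1], size=(n_subj, 1))   # flip each subject's whole map
        max_t[i] = t_stat(signs * acc).max()

    t_crit = np.quantile(max_t, 0.95)   # voxelwise FWE 5% threshold
    sig_vox = t_obs > t_crit            # voxels significant at FWE 0.05

SnPM does the equivalent for you on the real images; the sketch is only to show where the FWE-corrected threshold comes from.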

I have tried voxel-wise permutation and cluster-based permutation (cluster-forming threshold 0.001, cluster-wise 0.05 or 0.001) without smoothed variance.

OK, though just to be clear, I would describe the latter threshold as a “familywise error” (FWE) level, and it is unusual to use a 0.001 FWE significance level, as FWE control is usually not so powerful.

I have two somewhat conceptual questions:

1) I'm not sure if the parameters I used for the cluster-based permutation are appropriate: cluster-forming threshold 0.001, cluster-wise 0.001?

Again, I would use FWE 5% and not 0.1%=0.001. 


When I try a cluster-wise threshold of 0.05, the critical cluster size is only 2 voxels. When I try 0.001, it's 10. And the critical size is 16 when I use random field theory with the same cluster-forming threshold.

Presumably the RFT threshold is at the 5% level?  These really aren’t compatible. 

Without seeing the data it is hard to know whether RFT is sensible for this data, but permutation should be valid. 
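For reference, this is roughly where the permutation critical cluster size comes from (again a made-up NumPy/SciPy sketch, not SnPM code; it assumes a one-sample sign-flip design and a one-sided p < 0.001 cluster-forming threshold):

    import numpy as np
    from scipy import ndimage, stats

    rng = np.random.default_rng(1)
    n_subj, shape = 20, (30, 30, 30)                 # hypothetical data dimensions
    acc = rng.normal(0.0, 0.1, (n_subj,) + shape)    # accuracy - chance per subject

    def t_map(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(x.shape[0]))

    t_form = stats.t.ppf(1 - 0.001, df=n_subj - 1)   # cluster-forming threshold (p < .001)

    def max_cluster_size(t):
        labels, n = ndimage.label(t > t_form)        # connected supra-threshold components
        return 0 if n == 0 else np.bincount(labels.ravel())[1:].max()

    null_max = np.array([
        max_cluster_size(t_map(rng.choice([-1, 1], (n_subj, 1, 1, 1)) * acc))
        for _ in range(1000)
    ])
    k_crit = np.quantile(null_max, 0.95)             # FWE 5% critical cluster size

With real data the flips are applied to each subject's searchlight map inside the analysis mask; SnPM handles all of this when you request cluster-wise inference.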


2) For the voxel-wise permutation, I got clusters of various sizes; the smallest ones can include only 2 voxels.
Is it appropriate to apply some arbitrary cluster extent threshold, e.g., 10 voxels, in this case, or is it simply better to use the cluster-based permutation instead?

If you’re doing voxelwise inference there is no need to do any subsequent cluster-extent thresholding. However, as any further thresholding can only reduce the false positive rate, it is a valid/safe thing to do.
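If you do want to apply such a filter, it is nothing more than dropping small connected components from the map that already survived the voxelwise threshold; an illustrative SciPy sketch (not SnPM code):

    import numpy as np
    from scipy import ndimage

    def extent_filter(sig_mask, k=10):
        """Keep only clusters (connected components) of at least k voxels."""
        labels, n = ndimage.label(sig_mask)          # label the surviving voxels
        sizes = np.bincount(labels.ravel())          # sizes[0] is the background
        keep = [lab for lab in range(1, n + 1) if sizes[lab] >= k]
        return sig_mask & np.isin(labels, keep)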

Hope this helps!

-Tom


Any help would be appreciated.

Chuanji


--
_________________________________________________________
Thomas Nichols, PhD
Professor of Neuroimaging Statistics
Nuffield Department of Population Health | University of Oxford
Big Data Institute | Li Ka Shing Centre for Health Information and Discovery
Old Road Campus | Headington | Oxford | OX3 7LF | United Kingdom
T: +44 1865 743590 | E: thomas....@bdi.ox.ac.uk
W: http://nisox.org | http://www.bdi.ox.ac.uk


rl...@bu.edu

Jun 24, 2019, 11:48:40 PM
to Statistical Nonparametric Mapping
Hello SnPM Team,

I'm following this email thread as I'm also trying to apply a non-parametric test for a group-level searchlight analysis on each individual's accuracy_minus_chance image. I'm very new to this toolbox, so I'm wondering which result image (beta_0001 vs. snpmT+) I should refer to in order to see which regions show a group-level significant classification result against chance level? The beta_0001.img gives a very similar result to SPM, though... Thank you!

Ran

Thomas Nichols

Jun 25, 2019, 10:37:53 AM
to rl...@bu.edu, Statistical Nonparametric Mapping
Dear Ran,

I'm following this email thread as I'm also trying to apply a non-parametric test for a group-level searchlight analysis on each individual's accuracy_minus_chance image. I'm very new to this toolbox, so I'm wondering which result image (beta_0001 vs. snpmT+) I should refer to in order to see which regions show a group-level significant classification result against chance level? The beta_0001.img gives a very similar result to SPM, though... Thank you!

The snpmT+ image gives the t-statistic map, but you should use the Results facility to identify which results are significant while controlling FWE (voxelwise or clusterwise) or FDR (voxelwise).
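The beta_0001 image is just the effect estimate — for a one-sample design, the mean accuracy-minus-chance across subjects — which would explain why it looks so similar to SPM's: the permutation machinery changes the inference, not the estimate. A rough sketch of the distinction (plain nibabel/NumPy with hypothetical file names, not SnPM code):

    import numpy as np
    import nibabel as nib

    # Hypothetical file names: one accuracy-minus-chance image per subject.
    imgs = [nib.load(f"sub-{i:02d}_acc_minus_chance.nii") for i in range(1, 21)]
    data = np.stack([img.get_fdata() for img in imgs])       # subjects x X x Y x Z

    beta = data.mean(axis=0)                                  # effect estimate, ~ beta_0001
    se = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    t = np.divide(beta, se, out=np.zeros_like(beta), where=se > 0)   # t statistic, ~ snpmT+

    # Which voxels count as significant comes from the permutation-based
    # thresholds reported by the SnPM Results facility, not from beta or t alone.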

-Tom
 

__________________________________________________________
Thomas Nichols, PhD
Professor of Neuroimaging Statistics
Nuffield Department of Population Health | University of Oxford
Big Data Institute | Li Ka Shing Centre for Health Information and Discovery
Old Road Campus | Headington | Oxford | OX3 7LF | United Kingdom