SPM FWE, Monte Carlo and SnPM


Alex spm

Apr 26, 2021, 4:11:25 AM
to Statistical Nonparametric Mapping
Dear all, 

I analyzed fMRI data using SPM and a classical FWE threshold.
However, I recently read papers arguing that FWE control is not the gold standard we thought it was, and that we should now consider alternative statistics/error-control approaches.
I see two possibilities: Monte Carlo simulation to set an appropriate cluster size from the estimated FWHM (as in AFNI, though I now consider that cluster_threshold_beta is more suitable for SPM data), or non-parametric statistics.

I'm working on a one-sample t-test (N=50). I set up an SnPM one-sample t-test with voxel-wise inference at .05 FWE, and it gives me quite similar results to a standard SPM analysis and to a cluster size set from a Monte Carlo simulation.
However, it's not clear to me which approach is more appropriate.
Are the number of subjects and the model (i.e. one-sample) key parameters that should help select the best statistical approach?

Best, 

Alex

Thomas Nichols

Apr 26, 2021, 5:54:32 AM
to Alex spm, Statistical Nonparametric Mapping
Dear Alex,

I analyzed fMRI data using SPM and a classical FWE threshold.
However, I recently read papers arguing that FWE control is not the gold standard we thought it was, and that we should now consider alternative statistics/error-control approaches.
I see two possibilities: Monte Carlo simulation to set an appropriate cluster size from the estimated FWHM (as in AFNI, though I now consider that cluster_threshold_beta is more suitable for SPM data), or non-parametric statistics.

I'm not sure what references you're alluding to.  You need to understand that "FWE control" is a fundamental metric of false positive risk... it is generic to any method that is used to obtain results that should offer such "control".  For example, there are random field theory, Monte Carlo and permutation methods that offer FWE control with different types of assumptions.  When data are sufficiently smooth and there are sufficient DF (e.g. at least 20), I find that the RFT and MC methods basically give similar answers for peak height and cluster size.  The only advantage of the MC methods, to me, is the newly flexible methods developed by AFNI that account for both non-Gaussian spatial autocorrelation and non-stationary spatial autocorrelation.  But the safest methods, IMHO, are permutation based, which make the weakest assumptions. (I.e. in Monte Carlo you have to make assumptions about the distribution of the data and the form of the spatial autocorrelation, which usually are the same assumptions as made in RFT).
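To make the distinction concrete, here is a rough sketch of what a Monte Carlo cluster-size simulation does, assuming Gaussian noise and stationary smoothness. It is only an illustration of the idea, not AFNI's 3dClustSim, and the volume size, FWHM, threshold and simulation count are arbitrary example values:

import numpy as np
from scipy.ndimage import gaussian_filter, label

# Toy Monte Carlo cluster-extent simulation (illustration only, not 3dClustSim).
# All parameter values below are arbitrary examples.
rng = np.random.default_rng(0)
shape = (64, 64, 40)                 # example volume size in voxels
fwhm_vox = 3.0                       # assumed smoothness, in voxels
sigma = fwhm_vox / (2 * np.sqrt(2 * np.log(2)))
z_cdt = 3.09                         # cluster-defining threshold (z for p < .001)
n_sim = 1000

max_sizes = []
for _ in range(n_sim):
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    field /= field.std()             # re-standardise after smoothing
    labels, n_clus = label(field > z_cdt)
    max_sizes.append(0 if n_clus == 0 else np.bincount(labels.ravel())[1:].max())

# Cluster-extent threshold giving 5% FWE under these (Gaussian, stationary) assumptions
k_fwe = int(np.percentile(max_sizes, 95))
print(f"clusters of more than {k_fwe} voxels are significant at FWE 0.05")

The FWHM and the Gaussian/stationary assumptions are exactly what the simulation depends on, which is why permutation, making neither, is the safer choice when in doubt.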
 
I'm working on a one-sample t-test (N=50). I set up an SnPM one-sample t-test with voxel-wise inference at .05 FWE, and it gives me quite similar results to a standard SPM analysis and to a cluster size set from a Monte Carlo simulation.

With a decent sample size, I'm not surprised that SnPM agrees with MC (or even RFT).
 
However, it's not clear to me which approach is more appropriate.
Are the number of subjects and the model (i.e. one-sample) key parameters that should help select the best statistical approach?

The simplest way to view this is in terms of assumptions: Permutation makes the weakest assumptions, and as long as those assumptions are reasonable, it should be regarded as a 'gold standard' approach to which other methods can be compared. For the 1-sample t-test, the assumptions are that the errors are independent with a distribution that is symmetric and centered about zero. This is weaker than the usual Gaussian error assumptions and of course also involves no spatial autocorrelation assumptions.
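If it helps to see the mechanics, here is a rough sketch of the sign-flipping idea behind a permutation one-sample test with voxel-wise FWE inference (not SnPM's actual code; the data array and the number of permutations are placeholders):

import numpy as np

def one_sample_maxT(con, n_perm=5000, seed=0):
    # con: (n_subjects, n_voxels) array of first-level contrast values.
    # Returns the observed t map and voxel-wise FWE-corrected p-values
    # from the maximum-t null distribution (illustrative sketch only).
    rng = np.random.default_rng(seed)
    n = con.shape[0]

    def t_stat(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    t_obs = t_stat(con)

    # Under the null the errors are symmetric about zero, so each subject's
    # contrast image may have its sign flipped; the maximum t over voxels
    # for each relabelling builds the FWE null distribution.
    max_t = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n, 1))
        max_t[i] = t_stat(signs * con).max()

    p_fwe = (max_t[:, None] >= t_obs[None, :]).mean(0)
    return t_obs, p_fwe

# e.g. t, p = one_sample_maxT(np.random.randn(50, 10000))  # stand-in for N=50 data

Note that nothing here assumes Gaussian errors or any particular spatial autocorrelation; only independence and symmetry about zero are used.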

Does this help?

-Tom

Alex spm

Apr 26, 2021, 7:36:31 AM
to Thomas Nichols, Statistical Nonparametric Mapping
Dear Tom, 

On Mon, Apr 26, 2021 at 11:54, Thomas Nichols <thomas....@bdi.ox.ac.uk> wrote:
Dear Alex,

I analyzed fMRI data using SPM and a classical FWE threshold.
However, I recently read papers arguing that FWE control is not the gold standard we thought it was, and that we should now consider alternative statistics/error-control approaches.
I see two possibilities: Monte Carlo simulation to set an appropriate cluster size from the estimated FWHM (as in AFNI, though I now consider that cluster_threshold_beta is more suitable for SPM data), or non-parametric statistics.

I'm not sure what references you're alluding to.  You need to understand that "FWE control" is a fundamental metric of false positive risk... it is generic to any method that is used to obtain results that should offer such "control".  For example, there are random field theory, Monte Carlo and permutation methods that offer FWE control with different types of assumptions. 
You're right, I was not clear enough. By FWE I meant the correction applied in SPM under the "p value adjustment to control" option.
I'm just starting to get familiar with non-SPM FWE, Monte Carlo, and non-parametric analyses in fMRI, so I'm not sure I'm using the associated terms correctly.
When data are sufficiently smooth and there are sufficient DF (e.g. at least 20), I find that the RFT and MC methods basically give similar answers for peak height and cluster size.  The only advantage of the MC methods, to me, is the newly flexible methods developed by AFNI that account for both non-Gaussian spatial autocorrelation and non-stationary spatial autocorrelation. 
The last time I checked AFNI, I read that it did not offer strong false-positive control. If I remember correctly, AFNI gave me a surprisingly small cluster size (roughly 49 voxels). It sounded very liberal compared to a k=20, p<0.05 FWE correction in SPM (cluster_threshold_beta was stricter). Not a strong experimental argument, though, just a feeling.
But the safest methods, IMHO, are permutation based, which make the weakest assumptions. (I.e. in Monte Carlo you have to make assumptions about the distribution of the data and the form of the spatial autocorrelation, which usually are the same assumptions as made in RFT). 
 
I'm working on a one-sample t-test (N=50). I set up an SnPM one-sample t-test with voxel-wise inference at .05 FWE, and it gives me quite similar results to a standard SPM analysis and to a cluster size set from a Monte Carlo simulation.

With a decent sample size, I'm not surprised that SnPM agrees with MC (or even RFT).
So, the N is important (as always with fMRI data), and with a sufficient number of observations all three methods tend to give similar results?
 
However, it's not clear to me which approach is more appropriate.
Are the number of subjects and the model (i.e. one-sample) key parameters that should help select the best statistical approach?

The simplest way to view this is in terms of assumptions: Permutation makes the weakest assumptions, and as long as those assumptions are reasonable, it should be regarded as a 'gold standard' approach to which other methods can be compared. For the 1-sample t-test, the assumptions are that the errors are independent with a distribution that is symmetric and centered about zero. This is weaker than the usual Gaussian error assumptions and of course also involves no spatial autocorrelation assumptions.
So, the less stringent the assumptions, the less likely we are to introduce bias into our results? I wonder why SnPM (and non-parametric statistics) are not used more often in fMRI papers.

Does this help?
I'm a beginner in MC & permutations in fMRI analyses, so yes, it helps me a lot. 
Thank you!

Alex

-Tom

Thomas Nichols

Apr 26, 2021, 12:49:05 PM
to Alex spm, Statistical Nonparametric Mapping
Dear Alex,

I analyzed fMRI data using SPM and a classical FWE threshold.
However, I recently read papers arguing that FWE control is not the gold standard we thought it was, and that we should now consider alternative statistics/error-control approaches.
I see two possibilities: Monte Carlo simulation to set an appropriate cluster size from the estimated FWHM (as in AFNI, though I now consider that cluster_threshold_beta is more suitable for SPM data), or non-parametric statistics.

I'm not sure what references you're alluding to.  You need to understand that "FWE control" is a fundamental metric of false positive risk... it is generic to any method that is used to obtain results that should offer such "control".  For example, there are random field theory, Monte Carlo and permutation methods that offer FWE control with different types of assumptions. 
You're right, I was not clear enough. By FWE I meant the correction applied in SPM under the "p value adjustment to control" option.
I'm just starting to get familiar with non-SPM FWE, Monte Carlo, and non-parametric analyses in fMRI, so I'm not sure I'm using the associated terms correctly.

All methods are attempting to provide FWE control; if you're doing cluster-wise inference, this is at the cluster level.
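The same sign-flipping logic gives cluster-level FWE control if, for each relabelling, you record the size of the largest supra-threshold cluster instead of the maximum t. A rough sketch, again with placeholder inputs rather than SnPM's actual code:

import numpy as np
from scipy.ndimage import label

def one_sample_cluster_fwe(con_maps, t_cdt, n_perm=1000, seed=0):
    # con_maps: (n_subjects, x, y, z) array of first-level contrast maps.
    # t_cdt:    cluster-defining threshold on t (e.g. the t value for p < .001).
    # Returns the cluster size exceeded by only 5% of the null maxima.
    rng = np.random.default_rng(seed)
    n = con_maps.shape[0]

    def t_map(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n))

    def max_cluster(t):
        labels, k = label(t > t_cdt)
        return 0 if k == 0 else np.bincount(labels.ravel())[1:].max()

    null_max = [max_cluster(t_map(rng.choice([-1.0, 1.0], size=(n, 1, 1, 1)) * con_maps))
                for _ in range(n_perm)]
    return int(np.percentile(null_max, 95))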
 
When data are sufficiently smooth and there are sufficient DF (e.g. at least 20), I find that the RFT and MC methods basically give similar answers for peak height and cluster size.  The only advantage of the MC methods, to me, is the newly flexible methods developed by AFNI that account for both non-Gaussian spatial autocorrelation and non-stationary spatial autocorrelation. 
The last time I checked AFNI, I read that it did not offer strong false-positive control. If I remember correctly, AFNI gave me a surprisingly small cluster size (roughly 49 voxels). It sounded very liberal compared to a k=20, p<0.05 FWE correction in SPM (cluster_threshold_beta was stricter). Not a strong experimental argument, though, just a feeling.

Well, I've not done it myself, but it could be that the FWHM is too small... did you estimate it from the data? I do know that AFNI uses 1st-level FWHM estimates and not group-level ones... so that might contribute to setting the FWHM too small and thus to a cluster threshold that is too small.
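To make that dependence concrete, here is a very rough sketch of how smoothness can be read off a set of group-level residual images via the derivative variance of the standardised residuals. This is a simplified illustration, not SPM's spm_est_smoothness, and the residual array is a placeholder; the point is that a too-small FWHM from this step feeds directly into the Monte Carlo simulation and shrinks the cluster threshold it reports.

import numpy as np

def estimate_fwhm_vox(residuals):
    # residuals: (n_images, x, y, z) array of (e.g. group-level) residual maps.
    # Returns a rough per-axis FWHM estimate in voxel units, using the relation
    # FWHM = sqrt(4*ln(2) / Var(dZ/dx)) for a standardised Gaussian field.
    sd = residuals.std(0, ddof=1)
    sd[sd == 0] = np.inf                      # avoid dividing by zero outside the brain
    z = residuals / sd                        # standardise voxel-wise

    fwhm = []
    for axis in (1, 2, 3):
        d = np.diff(z, axis=axis)             # finite-difference spatial derivative
        fwhm.append(np.sqrt(4 * np.log(2) / np.mean(d ** 2)))
    return np.array(fwhm)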

 
But the safest methods, IMHO, are permutation based, which make the weakest assumptions. (I.e. in Monte Carlo you have to make assumptions about the distribution of the data and the form of the spatial autocorrelation, which usually are the same assumptions as made in RFT). 
 
I'm working on a one-sample t-test (N=50). I set up an SnPM one-sample t-test with voxel-wise inference at .05 FWE, and it gives me quite similar results to a standard SPM analysis and to a cluster size set from a Monte Carlo simulation.

With a decent sample size, I'm not surprised that SnPM agrees with MC (or even RFT).
So, the N is important (as always with fMRI data), and with a sufficient number of observations all three methods tend to give similar results?

That's been my experience, yes. And an N of 50 or more is where you can start to count on that convergence.
 
However, it's not clear to me which approach is more appropriate.
Are the number of subjects and the model (i.e. one-sample) key parameters that should help select the best statistical approach?

The simplest way to view this is in terms of assumptions: Permutation makes the weakest assumptions, and as long as those assumptions are reasonable, it should be regarded as a 'gold standard' approach to which other methods can be compared. For the 1-sample t-test, the assumptions are that the errors are independent with a distribution that is symmetric and centered about zero. This is weaker than the usual Gaussian error assumptions and of course also involves no spatial autocorrelation assumptions.
So, the less stringent the assumptions, the less likely we are to introduce bias into our results? I wonder why SnPM (and non-parametric statistics) are not used more often in fMRI papers.

I wonder that sometimes too :)

Does this help?
I'm a beginner in MC & permutations in fMRI analyses, so yes, it helps me a lot. 

No problem!

-Tom 

Alex spm

Apr 26, 2021, 2:28:16 PM
to Thomas Nichols, Statistical Nonparametric Mapping
Dear Tom, 

On Mon, Apr 26, 2021 at 18:49, Thomas Nichols <thomas....@bdi.ox.ac.uk> wrote:
Dear Alex,

I analyzed fMRI data using SPM and a classical FWE threshold.
However, I recently read papers arguing that FWE control is not the gold standard we thought it was, and that we should now consider alternative statistics/error-control approaches.
I see two possibilities: Monte Carlo simulation to set an appropriate cluster size from the estimated FWHM (as in AFNI, though I now consider that cluster_threshold_beta is more suitable for SPM data), or non-parametric statistics.

I'm not sure what references you're alluding to.  You need to understand that "FWE control" is a fundamental metric of false positive risk... it is generic to any method that is used to obtain results that should offer such "control".  For example, there are random field theory, Monte Carlo and permutation methods that offer FWE control with different types of assumptions. 
You're right, I was not clear enough. By FWE I meant the correction applied in SPM under the "p value adjustment to control" option.
I'm just starting to get familiar with non-SPM FWE, Monte Carlo, and non-parametric analyses in fMRI, so I'm not sure I'm using the associated terms correctly.

All methods are attempting to provide FWE control; if you're doing cluster-wise inference, this is at the cluster level.
 
When data are sufficiently smooth and there are sufficient DF (e.g. at least 20), I find that the RFT and MC methods basically give similar answers for peak height and cluster size.  The only advantage of the MC methods, to me, is the newly flexible methods developed by AFNI that account for both non-Gaussian spatial autocorrelation and non-stationary spatial autocorrelation. 
The last time I checked AFNI, I read that it did not offer strong false-positive control. If I remember correctly, AFNI gave me a surprisingly small cluster size (roughly 49 voxels). It sounded very liberal compared to a k=20, p<0.05 FWE correction in SPM (cluster_threshold_beta was stricter). Not a strong experimental argument, though, just a feeling.

Well, I've not done it myself, but it could be that the FWHM is too small... did you estimate it from the data? I do know that AFNI uses 1st-level FWHM estimates and not group-level ones... so that might contribute to setting the FWHM too small and thus to a cluster threshold that is too small.
I used a concatenation of the individual Res files from the 2nd-level estimation in SPM (I don't save the residuals at the 1st level). Indeed, the FWHM was very small compared to the one used by cluster_threshold_beta.

 
But the safest methods, IMHO, are permutation based, which make the weakest assumptions. (I.e. in Monte Carlo you have to make assumptions about the distribution of the data and the form of the spatial autocorrelation, which usually are the same assumptions as made in RFT). 
 
I'm working on a one-sample t-test (N=50). I set up an SnPM one-sample t-test with voxel-wise inference at .05 FWE, and it gives me quite similar results to a standard SPM analysis and to a cluster size set from a Monte Carlo simulation.

With a decent sample size, I'm not surprised that SnPM agrees with MC (or even RFT).
So, the N is important (as always with fMRI data), and with a sufficient number of observations all three methods tend to give similar results?

That's been my experience, yes. And an N of 50 or more is where you can start to count on that convergence.
 
However, it's not clear to me which approach is more appropriate.
Are the number of subjects and the model (i.e. one-sample) key parameters that should help select the best statistical approach?

The simplest way to view this is in terms of assumptions: Permutation makes the weakest assumptions, and as long as those assumptions are reasonable, it should be regarded as a 'gold standard' approach to which other methods can be compared. For the 1-sample t-test, the assumptions are that the errors are independent with a distribution that is symmetric and centered about zero. This is weaker than the usual Gaussian error assumptions and of course also involves no spatial autocorrelation assumptions.
So, the less stringent the assumptions, the less likely we are to introduce bias into our results? I wonder why SnPM (and non-parametric statistics) are not used more often in fMRI papers.

I wonder that sometimes too :)
From my point of view, non-parametric statistics have long suffered from an image of "low-level statistics" (at least in psychology courses).
It is sometimes difficult for psychology-trained fMRI researchers like me to change their minds (and their co-authors' minds!). I hope to publish with this type of analysis some day!

Does this help?
I'm a beginner in MC & permutations in fMRI analyses, so yes, it helps me a lot. 

No problem!

-Tom 
Thanks!

Alex 