Bootstrapping/batch results


Haris Styliadis

Jun 14, 2019, 5:37:23 AM6/14/19
to SwE-Toolbox Support
Dear Dr. Nichols and SwE experts.

I have used SwE to obtain results for a 2x2x2 design on EEG data. All my results are at p < 0.001 uncorrected, but I have analyzed time segments that had previously been found to be significant via a time-point-by-time-point TANOVA corrected for multiple testing over time with RAGU (http://www.thomaskoenig.ch/index.php/work/ragu).

I wanted to check whether my results can be cluster-corrected.

Previously, for other data, I performed the statistics via SPM's flexible design and obtained cluster correction via REST (http://www.restfmri.net/forum/). Their newer version seems to have overcome the problem of inflated false-positive rates discussed in Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 113(28), 7900-7905.

I read in another thread that we should not use pTFCE, as it relies on results from Random Field Theory, which has not been validated for the SwE method. I have used it with SnPM results but was not happy with the outcome.

For the current data analyzed with SwE, I have tried bootstrapping with both the TFCE and voxel-wise options. I performed the tests with the default settings, and only for the contrast which gave me the lowest cluster at p < 0.001 uncorrected. Attached are the SwE results for uncorrected p without bootstrapping, bootstrapping-voxelwise, and bootstrapping-TFCE.


To sum up, my questions are the following:
  1. Regarding SwE's bootstrapping options, with the voxel-wise option I get a more reliable p-value for only 1 voxel, but I also get an FWE-corrected value; I thought this method did not provide corrected p-values. With SwE's TFCE option I get an FWE-corrected p-value for 4 voxels, but also a much more extended cluster of 5726 voxels, which was only 53 voxels in the estimation without bootstrapping. So which is corrected? And which could be considered publishable?
  2. My second question is about SnPM. Can I use REST or pTFCE for results obtained with SnPM?
  3. Finally, is there a way to batch the results section? I have several significant time segments for which I have to estimate a great number of F- and T-tests. Is there a way to define the contrasts, along with their names and test type, in the job file? This would be a time-saver. I saw that there is a field xCon in the SwE struct where these are stored after we define them in SwE's contrast manager.

Thank you for your time and support

With kind regards

Charis
arousal_uncorrected.PNG
arousal_boot_voxelwise.PNG
arousal_boot_tfce.PNG

Bryan GUILLAUME

Jun 17, 2019, 7:43:44 AM6/17/19
to swe-t...@googlegroups.com
Dear Charis,

Please see my answers below. 

Hope this helps,
Bryan

Begin forwarded message:

From: Haris Styliadis <hstyl...@gmail.com>
Subject: [SwE] Bootstrapping/batch results
Date: 14 June 2019 at 11:37:23 CEST
To: SwE-Toolbox Support <swe-t...@googlegroups.com>

Dear Dr. Nichols and SwE experts.

I have used SwE to obtain results for a 2x2x2 design on EEG data. All my results are at p < 0.001 uncorrected, but I have analyzed time segments that had previously been found to be significant via a time-point-by-time-point TANOVA corrected for multiple testing over time with RAGU (http://www.thomaskoenig.ch/index.php/work/ragu).

I wanted to check whether my results can be cluster-corrected.

Yes, they can, using a cluster-wise Wild Bootstrap (see further explanation below).


Previously, for other data, I performed the statistics via SPM's flexible design and obtained cluster correction via REST (http://www.restfmri.net/forum/). Their newer version seems to have overcome the problem of inflated false-positive rates discussed in Eklund, A., Nichols, T. E., & Knutsson, H. (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 113(28), 7900-7905.

I read in another thread that we should not use pTFCE, as it relies on results from Random Field Theory, which has not been validated for the SwE method. I have used it with SnPM results but was not happy with the outcome.

That is correct. pTFCE should not be used with results from SwE.


For the current data analyzed with SwE, I have tried bootstrapping with both the TFCE and voxel-wise options. I performed the tests with the default settings, and only for the contrast which gave me the lowest cluster at p < 0.001 uncorrected. Attached are the SwE results for uncorrected p without bootstrapping, bootstrapping-voxelwise, and bootstrapping-TFCE.

At first glance, the results seem plausible. Using the Wild Bootstrap option, you can control the FWER, which is an advantage compared to the parametric version, for which you can only use FDR correction. As an FWER p-value threshold of 5% can be more stringent than an uncorrected p-value threshold of 0.1%, you can indeed have fewer voxels surviving the threshold, as observed in your case. For the TFCE results, please see my reply below.



To sum up, my questions are the following:
  1. Regarding SwE's bootstrapping options, with the voxel-wise option I get a more reliable p-value for only 1 voxel, but I also get an FWE-corrected value; I thought this method did not provide corrected p-values. With SwE's TFCE option I get an FWE-corrected p-value for 4 voxels, but also a much more extended cluster of 5726 voxels, which was only 53 voxels in the estimation without bootstrapping. So which is corrected? And which could be considered publishable?
The Wild Bootstrap (WB) can indeed control the FWER by looking at the distribution of the maximum statistic, as is done in SnPM. Thus it can give you FWER-corrected p-values. Note that you have used a voxel-wise WB, so the p-values are voxel-wise FWER-corrected p-values. If you want cluster-wise FWER p-values, you would need to specify it in the "Specify model" batch module ("Select Inference type" field -> "Clusterwise"). TFCE combines voxel-wise and cluster-wise information and may therefore detect an extended cluster that a voxel-wise analysis would miss. In your case, it is likely that the cluster you found contains a very extended signal that appears relatively weak at each of its voxels. If you relax your FWER p-value threshold (e.g., to 0.2) for the voxel-wise WB, you may start to see voxels of this cluster appearing in your results.
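For intuition about why a broad but weak cluster can dominate the TFCE map, here is a toy MATLAB sketch of the TFCE idea (illustration only, not the SwE implementation; it follows the Smith & Nichols (2009) formulation with the default parameters E = 0.5 and H = 2, and uses bwlabeln from the Image Processing Toolbox on synthetic placeholder data):

stat = max(0, 2 + randn(20, 20, 20));    % toy statistic image (placeholder data)
E = 0.5; H = 2; dh = 0.1;                % TFCE defaults and height step
tfce = zeros(size(stat));
for h = dh:dh:max(stat(:))
    [lbl, nClus] = bwlabeln(stat >= h);  % supra-threshold clusters at height h
    for c = 1:nClus
        idx = (lbl == c);
        % every voxel of the cluster accumulates extent^E * height^H * dh
        tfce(idx) = tfce(idx) + nnz(idx)^E * h^H * dh;
    end
end

A broad cluster keeps contributing at the low heights even when none of its voxels is individually strong, which is consistent with the large TFCE cluster you observed.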

  2. My second question is about SnPM. Can I use REST or pTFCE for results obtained with SnPM?
I do not know much about REST and pTFCE, so Tom Nichols may be able to answer this better, or you could send this specific question to the SnPM mailing list (snpm-s...@googlegroups.com). The only thing I can say now is that, with SnPM, you would not need pTFCE, as the TFCE p-values are computed using permutations rather than Random Field Theory as in pTFCE.

  3. Finally, is there a way to batch the results section? I have several significant time segments for which I have to estimate a great number of F- and T-tests. Is there a way to define the contrasts, along with their names and test type, in the job file? This would be a time-saver. I saw that there is a field xCon in the SwE struct where these are stored after we define them in SwE's contrast manager.
It should be possible. However, as I have never done this before, please give me some time to experiment with it and I will get back to you soon with a procedure. Please also note that the field "xCon" was not saved in SwE.mat when a Wild Bootstrap analysis was run (it was only saved for parametric analyses). The new version (https://github.com/NISOx-BDI/SwE-toolbox/releases/tag/v2.1.1), which will be announced soon, should fix this issue.
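In the meantime, here is an untested sketch of what such a script could look like. This is not an official SwE procedure: it assumes that SwE.mat stores the design matrix in SwE.xX.X, that contrasts live in SwE.xCon in the same format SPM uses, and that swe_contrasts(SwE, Ic) behaves like SPM's spm_contrasts(SPM, Ic); please check these assumptions against your SwE version.

analysisDir = pwd;                             % folder containing SwE.mat
load(fullfile(analysisDir, 'SwE.mat'));        % loads the SwE struct

% name, statistic type and weights for each contrast
% (placeholder weights for illustration; adapt them to your design matrix)
cons = { 'main effect A',     'F', [1 -1  1 -1  1 -1  1 -1]; ...
         'interaction A x B', 'F', [1 -1 -1  1  1 -1 -1  1]; ...
         'cond1 > cond2',     'T', [1 -1  0  0  0  0  0  0] };

sX = spm_sp('Set', SwE.xX.X);                  % design space (standard SPM call; assumes SwE.xX.X is the design matrix)
for i = 1:size(cons, 1)
    xcon = spm_FcUtil('Set', cons{i,1}, cons{i,2}, 'c', cons{i,3}', sX);
    if ~isfield(SwE, 'xCon') || isempty(SwE.xCon)
        SwE.xCon = xcon;
    else
        SwE.xCon(end + 1) = xcon;              % append to the existing contrasts
    end
end
SwE = swe_contrasts(SwE, 1:numel(SwE.xCon));   % assumed analogue of spm_contrasts; verify this function in your SwE version

Every line commented as an assumption above is a guess based on how SPM itself handles contrasts and would need to be checked against the toolbox code.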


Thank you for your time and support

With kind regards

Charis


Haris Styliadis

Jun 17, 2019, 2:56:04 PM6/17/19
to SwE-Toolbox Support
Dear Bryan, thank you for your answer. I will try WB with the new version as well.

Best,

Charis

philip Joadavi

Dec 20, 2021, 6:08:04 PM12/20/21
to SwE-Toolbox Support
Dear Bryan and SwE experts,

I'm writing regarding this question that was posted in 2019. I would like to run many F- and T-tests, but since each run takes time, I would like to run them all at once on the university's cluster computers. Could you please let me know if there is a procedure to batch the results section?

Thanks a lot!
Philip
 
  3. Finally, is there a way to batch the results section? I have several significant time segments for which I have to estimate a great number of F- and T-tests. Is there a way to define the contrasts, along with their names and test type, in the job file? This would be a time-saver.
It should be possible. However, as I have never done this before, please give me some time to experiment with it and I will get back to you soon with a procedure to do so.

Grant Tays

Feb 2, 2022, 8:44:24 AM2/2/22
to SwE-Toolbox Support
Just following up on this to see if there has been an update. I am not associated with the original post, but being able to batch the results would be amazing.