Dear experts,
I have analysed a set of 31 difference images (each the difference between two conditions) using a one-sample t-test. I have implemented the test both within SnPM and using FSL's randomise with the “-c” (cluster-based thresholding) option.
The options I specify in SnPM are shown in the attached screenshot. I feed the 31 difference images in as input and subsequently analyse the output using the options shown for the “Inference” batch.
In FSL, meanwhile, I feed in precisely the same difference images as a single 4D NIfTI file and run randomise with the following options:
randomise -m mask -i data -o t_test_results -v 8 -c 3.1 -R -n 5000 -N -1
where “mask” is a binary mask excluding voxels outside the brain, and “data” is the 4D NIfTI file containing the same 31 images that I feed into SnPM.
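To check my own understanding of what the “-1” (one-sample, sign-flipping) test is doing at each voxel, I wrote the toy numpy sketch below. This is only an illustration of the general sign-flipping technique, not FSL's implementation; the function name, seed handling, and one-sided counting rule are my own choices.

```python
import numpy as np

def sign_flip_one_sample(diffs, n_perm=5000, seed=0):
    """One-sample t-test via sign-flipping: under the null the difference
    values are symmetric about zero, so each subject's sign can be flipped
    at random to build the null distribution of the t statistic.
    Toy sketch only -- not FSL's code."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(diffs, dtype=float)
    n = diffs.size

    def tstat(x):
        # one-sample t statistic: mean over its standard error
        return x.mean() / (x.std(ddof=1) / np.sqrt(n))

    t_obs = tstat(diffs)
    exceed = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=n)
        if tstat(diffs * flips) >= t_obs:
            exceed += 1
    # add-one correction so the estimated p is never exactly zero
    return t_obs, (exceed + 1) / (n_perm + 1)
```

In randomise, as I understand it, the same set of sign-flips is applied to every voxel at once and the maximum statistic over the image is used for the FWE correction; the sketch above is just the single-voxel version.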
I subsequently run FSL's “cluster” on the randomise output to extract the clusters, passing in the “clustere_corrp_tstat1” image and setting the threshold to “-t 0.95”; since the corrp images store 1 − p, this matches the p < 0.05 FWE requirement in SnPM above.
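As a sanity check on that thresholding step, here is a small numpy/scipy sketch of what I understand the cluster extraction to be doing on the 1 − p image. Note that the function name and toy volume are mine, and that scipy's default 3D connectivity is 6-neighbour, which may well differ from the neighbourhood definition FSL or SnPM uses.

```python
import numpy as np
from scipy import ndimage

def extract_clusters(one_minus_p, thresh=0.95):
    """Threshold a 1-p ("corrp") volume at `thresh` and label the surviving
    voxels into connected clusters -- roughly the operation performed when
    thresholding at 0.95. scipy's default 3D connectivity is 6-neighbour,
    which may not match FSL's or SnPM's neighbourhood definition."""
    mask = np.asarray(one_minus_p) > thresh
    labels, n_clusters = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n_clusters + 1))
    return labels, sizes.astype(int)
```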
I find a single significant cluster in the data (the same one is found by both SnPM and randomise), but the cluster extent and the FWE-corrected cluster p-value reported by the two methods differ somewhat:
In SnPM : Cluster size = 2579 voxels , FWE corrected p = 0.0370
In FSL : Cluster size = 2722 voxels , FWE corrected p = 0.0300
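One thing I did check is the Monte Carlo resolution of a p-value estimated from 5000 permutations, using the usual binomial approximation (this back-of-envelope script is my own, not from either package):

```python
import math

def perm_p_se(p, n_perm):
    """Binomial standard error of a p-value estimated from n_perm
    random permutations: sqrt(p * (1 - p) / n_perm)."""
    return math.sqrt(p * (1 - p) / n_perm)

# At p around 0.03 with 5000 permutations the standard error is about 0.0024.
print(round(perm_p_se(0.03, 5000), 4))
```

By that estimate the two reported p-values (0.0300 and 0.0370) sit a few standard errors apart, which is partly why I suspect something systematic rather than pure permutation noise.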
I would like to understand the source of this difference, and whether it is due to something I have done wrong in either method. I think the number of permutations performed is large enough that this is not simply due to a poorly sampled null distribution. Any help would be very much appreciated!
Kind regards,
Donal