Hi, I know this is a very old post, but since I am working on this right now myself I thought I'd write down my thoughts:
The taskbatch.sh scripts, which rely on FSL, don't really work when you have empty EV files. Say you have a memory experiment and only some items from a certain condition were remembered, and this varies across subjects: you may end up with an empty output file or an error message.
You can of course exclude runs or subjects where an EV file for a condition would be empty, but that way you may lose a lot of data, and sometimes that isn't a viable option.
That's why you may want to use other software, for example SPM, which can handle missing onsets for a condition and also comes with the advantages of a nice GUI, easy inspection of results, and perhaps even the utilities of other toolboxes (not sure).
When using SPM you have two options:
B) You can also convert your CIFTIs into fake-NIfTI data (as was done here: https://doi.org/10.1002/hbm.25734) and submit those to SPM. Then convert the fake-NIfTI con_maps from the first level back to surface and volume files (LH, RH, subcortex) and perform the second-level stats on them with PALM. It's a little awkward, but it works. PALM takes ages to run, though.
Regardless of whether you pick SPM or PALM, however, both have the flaw that correction for multiple testing becomes awkward, since you need to separate your data into surface and volume parts and calculate the stats independently for each.
Regarding SPM's FWE correction: random field theory would have to be applied separately to the cortical and subcortical maps, resulting in two separate corrections/outputs. If you later combine these data, the combined map wouldn't have a unified, equivalent correction for multiple comparisons across surface and volume, which might lead to incorrect inferences. I am not sure, though, how to obtain FWE correction for a merged surface-volume version. Any ideas for a workaround in SPM? Isn't there a CIFTI toolbox for SPM now?
When using PALM and you combine, say, LH.gii, RH.gii, and subcortex.nii, you should correct the significance threshold: either via Bonferroni, i.e., -log10(alpha/N) = -log10(0.05/3) ≈ 1.7782, or via Šidák, i.e., -log10(1-(1-alpha)^(1/N)) = -log10(1-(1-0.05)^(1/3)) ≈ 1.7708. You basically divide by three (N) because you have three separate tests: LH, RH, and volume. But I am not sure whether this really corrects the p-values accurately, because the different brain parts have different numbers of vertices/voxels: the two hemispheres have different vertex counts, and the volumetric part has far fewer voxels. And would you have to divide by two instead of three if you were to submit one giant CORTEX surface .gii plus a subcortex .nii?
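For the record, here is a small snippet that computes those -log10(p) thresholds for both correction schemes, for N = 3 (LH, RH, volume) and for the N = 2 case (one merged cortex surface plus subcortex) asked about above:

```python
import math

def bonferroni_logp(alpha, n):
    # -log10 of the Bonferroni-corrected per-test alpha: alpha / n
    return -math.log10(alpha / n)

def sidak_logp(alpha, n):
    # -log10 of the Sidak-corrected per-test alpha: 1 - (1 - alpha)^(1/n)
    return -math.log10(1 - (1 - alpha) ** (1 / n))

for n in (3, 2):
    print(f"N={n}: Bonferroni {bonferroni_logp(0.05, n):.4f}, "
          f"Sidak {sidak_logp(0.05, n):.4f}")
```

For N = 3 this reproduces the 1.7782 (Bonferroni) and 1.7708 (Šidák) values; for N = 2 the thresholds drop to about 1.6021 and 1.5965. Note that this only changes the number of tests, not the per-part number of vertices/voxels, so it doesn't address the unequal-size concern.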