> On 3 Jul 2023, at 11:36, Lukas Van Oudenhove <
lukas.vano...@gmail.com> wrote:
>
> I agree with your interpretation that the signal is not very strong due to high variability across participants, which is interesting on its own.
>
> There are some negative z-values surviving the < -1.96 corrected threshold, and I explored the uncorrected group maps using cosmo_stat (thanks for your suggestion).
> I have a question related to this though: does this differ from cosmo_montecarlo_cluster_stat in two ways, being 1) non-parametric and 2) TFCE corrected?
> In the output of the latter, z-values range from -2.23 to 0 (no single positive value, but spanning the entire negative range), while in the output of the former, the range of z-values is -4.8587 to 2.5530 (which fits with stronger negative than positive effects).
Indeed, when using cosmo_stat the z-scores are not corrected for multiple comparisons, so the range of z-scores is expected to be wider.
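To illustrate, a minimal sketch of computing such an uncorrected group z-map with cosmo_stat (the variable name ds_group and the 30-subject one-sample setup are assumptions based on your description):

```matlab
% Hypothetical sketch: uncorrected group-level z-map using cosmo_stat.
% Assumes ds_group has one row in .samples per subject (30 rows),
% set up as a one-sample design (all targets 1, chunks 1:30).
ds_group.sa.targets=ones(30,1);
ds_group.sa.chunks=(1:30)';

% one-sample t-statistic per feature, converted to z-scores
z_ds=cosmo_stat(ds_group,'t','z');
% z_ds.samples now holds uncorrected z-scores, which can span a
% wider range than the TFCE-corrected output
```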
>
> Finally, I read in the helpful documentation of cosmo_montecarlo_cluster_stat that inputting null values gives more power, hence I would like to implement this (using cosmo_randomize_targets as suggested).
> I see that I need a 1xP cell array (with P being number of subjects, right?) with approx 100 null datasets per subject.
> My question is: at which level do I generate these null datasets? For each subject separately during my first-level loop over subjects running cosmo_searchlight, I presume?
Yes indeed.
> I could then set up a loop within each subject invoking cosmo_randomize_targets 100 times, assign the output to ds.targets,
I think you mean ds.sa.targets.
> and run cosmo_searchlight on it 100 times)?
Yes, 100 times for each participant. I would advise storing these results on disk. With 30 participants, one way to do so is to store 3000 (=30 * 100) files with the output from cosmo_searchlight.
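A sketch of what that per-subject loop could look like (the filename scheme, subj variable, and measure arguments are assumptions; ds, nbrhood, and measure are the same as for the real searchlight):

```matlab
% Hypothetical sketch of the per-subject null loop.
n_null=100;
for iter=1:n_null
    ds_null=ds;
    % permute the condition labels for this null iteration
    ds_null.sa.targets=cosmo_randomize_targets(ds);

    % run the searchlight on the label-permuted dataset
    res=cosmo_searchlight(ds_null,nbrhood,measure,measure_args);

    % store each null result on disk (assumed filename scheme)
    fn=sprintf('sub%02d_null%03d.mat',subj,iter);
    save(fn,'res');
end
```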
> Should I then stack those 100 datasets and put that stacked ds in the cell array for that subject? This will then result in a null dataset for each subject which has the same number of columns in ds.samples as the true dataset, but a different number of rows (30 in case of the true dataset, as I have 30 subjects, but 100 in case of each null dataset).
No, the idea is to have 100 null datasets, each with 30 rows in .samples. Also: cosmo_montecarlo_cluster_stat should complain (raise an error) if an incorrect shape is used.
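In other words, stack across subjects within each null iteration, not across iterations within a subject. A sketch of assembling the null cell array and passing it on (filename scheme and variable names are assumptions, matching the loop above at the first level):

```matlab
% Hypothetical sketch: build the 1x100 null cell array for
% cosmo_montecarlo_cluster_stat. Each element is one null group
% dataset with 30 rows (one per subject) in .samples.
n_subj=30; n_null=100;
null_cell=cell(1,n_null);
for iter=1:n_null
    per_subj=cell(n_subj,1);
    for subj=1:n_subj
        fn=sprintf('sub%02d_null%03d.mat',subj,iter); % assumed scheme
        s=load(fn);
        per_subj{subj}=s.res;
    end
    % stack 30 single-subject maps into one 30-row null dataset
    null_cell{iter}=cosmo_stack(per_subj);
end

% group-level TFCE-corrected stats using the null data
opt=struct();
opt.cluster_stat='tfce';
opt.niter=10000;
opt.h0_mean=0;
opt.null=null_cell;
z_tfce=cosmo_montecarlo_cluster_stat(ds_group,nbrhood,opt);
```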