Hi all,

I had a question about the effects of entering parametric modulators in SPM in a multi-session design matrix. If you have the same parametric modulator for multiple runs, will the modulator values be normalized relative to their values on the other runs? If so, can you avoid this effect by concatenating onsets and parametric modulators across runs?
My follow-up question is a little more abstract, so it might be more difficult to answer; feel free to point me towards any useful resources. It is my understanding that SPM orthogonalizes all of the parametric modulators for the same condition relative to the first one you enter, though you can also use special tricks to keep it from orthogonalizing. If you want to compare the fit of different parametric modulators to each other, is it best to run several different models, each with only one of these parametric modulators, and compare them with some sort of model-selection procedure, or to compare the modulators within the same model? Does the strategy change if the modulators are not orthogonal to each other, for example a value that is the sum of two other parametric modulators?
Thanks,
Jessica
--
You received this message because you are subscribed to the Google Groups "WagerlabTools" group.
Hi Jessica,

See my comments below.
On Mar 4, 2013, at 7:57 PM, Jessica Mollick <jmol...@gmail.com> wrote:

> Hi all, I had a question about the effects of entering parametric modulators in SPM in a multi-session design matrix. If you have the same parametric modulator for multiple runs, will the modulator values be normalized relative to their values on the other runs? If so, can you avoid this effect by concatenating onsets and parametric modulators across runs?
TW: Modulators will be CENTERED WITHIN RUN. That means that you will lose the information about variation in the modulator values across runs. Depending on your application, this could be a big deal. You can avoid this by concatenating across runs, yes. If you do so, you have to make sure to model the run intercepts using user-specified regressors!
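To make the centering issue concrete, here is a minimal numpy sketch (SPM itself is MATLAB, and the modulator values here are made up for illustration): centering within run erases any between-run differences in the modulator, whereas centering once across a concatenated series keeps them.

```python
import numpy as np

# Hypothetical parametric modulator values for two runs;
# run 2 happens to have systematically higher values.
run1 = np.array([1.0, 2.0, 3.0])
run2 = np.array([4.0, 5.0, 6.0])

# Within-run centering (SPM's default): the run-2-minus-run-1
# mean difference in the modulator disappears entirely.
within = np.concatenate([run1 - run1.mean(), run2 - run2.mean()])

# Concatenated alternative: center once across all runs, so
# between-run variation in the modulator is retained.
both = np.concatenate([run1, run2])
across = both - both.mean()

print(within)  # [-1.  0.  1. -1.  0.  1.]  -> the two runs look identical
print(across)  # [-2.5 -1.5 -0.5  0.5  1.5  2.5] -> run difference is kept
```

In the concatenated design, the run intercepts you add as user-specified regressors then take the role of the per-run baselines that SPM would otherwise create for you.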
> My follow-up question is a little more abstract, so it might be more difficult to answer; feel free to point me towards any useful resources. It is my understanding that SPM orthogonalizes all of the parametric modulators for the same condition relative to the first one you enter, though you can also use special tricks to keep it from orthogonalizing. If you want to compare the fit of different parametric modulators to each other, is it best to run several different models, each with only one of these parametric modulators, and compare them with some sort of model-selection procedure, or to compare the modulators within the same model? Does the strategy change if the modulators are not orthogonal to each other, such as a value that is the sum of two other parametric modulators?

TW: I believe this is correct: SPM serially orthogonalizes your regressors, so each preceding regressor is privileged with the shared variance.
This makes sense for certain types of analyses, particularly if you have only one parametric modulator and want shared variance to go to the stimulus regressor. However, I think this is a bad default if you have multiple parametric regressors. It is probably better to turn orthogonalization off by default, as it is usually more interesting to model each modulator's variance that is independent of all the other regressors.
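For intuition, here is a rough numpy sketch of the kind of serial orthogonalization SPM applies (SPM's own routine is spm_orth in MATLAB; the data below are simulated): each modulator is residualized against all preceding columns, so any shared variance is credited to whichever modulator you entered first.

```python
import numpy as np

def serial_orth(X):
    """Serially orthogonalize the columns of X: column k is replaced
    by its residuals after regressing on columns 0..k-1. This is a
    rough conceptual analogue of SPM's serial orthogonalization."""
    X = np.asarray(X, dtype=float).copy()
    for k in range(1, X.shape[1]):
        Q = X[:, :k]
        beta, *_ = np.linalg.lstsq(Q, X[:, k], rcond=None)
        X[:, k] = X[:, k] - Q @ beta
    return X

rng = np.random.default_rng(0)
a = rng.standard_normal(100)            # first modulator entered
b = 0.8 * a + 0.2 * rng.standard_normal(100)  # second, correlated with a
X = serial_orth(np.column_stack([a, b]))

# The first column is untouched; the second is now orthogonal to it,
# so all variance that a and b shared now belongs to a.
print(np.allclose(X[:, 0], a))     # True
print(np.dot(X[:, 0], X[:, 1]))    # ~0 (orthogonal)
```

With orthogonalization off, both modulators instead compete in the regression, and each beta reflects that modulator's unique contribution.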
This is one of the benefits of multiple regression, provided multicollinearity among the regressors isn't too high. There is info on how to do this on the SPM listserv, and I believe Marieke has done this in her build on dream, if you just want to set your path to her directory.
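If you do put several non-orthogonalized modulators in one model, it is worth checking their multicollinearity first. A quick hand-rolled check in Python (variance inflation factors, computed from simulated modulators; the near-redundant third modulator mimics the "sum of two other modulators" case from the question):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X: regress that
    column on all the others (plus an intercept) and return
    SS_total / SS_residual, i.e. 1 / (1 - R^2)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for k in range(p):
        y = X[:, k]
        others = np.column_stack([np.ones(n), np.delete(X, k, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        out.append(((y - y.mean()) ** 2).sum() / (resid @ resid))
    return np.array(out)

rng = np.random.default_rng(1)
a = rng.standard_normal(200)
b = rng.standard_normal(200)
c = a + b + 0.5 * rng.standard_normal(200)  # nearly the sum of a and b
v = vif(np.column_stack([a, b, c]))
print(v)  # the third modulator's VIF is inflated well above 1
```

As a common rule of thumb, VIFs much above 5-10 suggest the betas for those modulators will be unstable, and a model-comparison approach across separate models may be the safer way to adjudicate between them.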