Parametric modulators (within session) or concatenate data?


Jessica Mollick

unread,
Mar 4, 2013, 9:57:09 PM3/4/13
to wagerl...@googlegroups.com
Hi all,
I had a question about the effects of entering parametric modulators in SPM in a multi-session design matrix. If you have the same parametric modulator for multiple runs, will the modulator values be normalized at all relative to their values on the other runs? If so, can you avoid this effect by concatenating onsets and parametric modulators across runs?

My follow-up question is a little more abstract, so it might be more difficult to answer; feel free to point me towards any useful resources. It is my understanding that SPM orthogonalizes all of your parametric modulators for the same condition relative to the first one you enter, though you can also do special tricks to make it not orthogonalize. If you want to compare the fit of different parametric modulators to each other, is it best to run a bunch of different models, each with only one of these parametric modulators, and compare them with some sort of model selection procedure, or to compare modulators within the same model? Does the strategy change if the modulators are not orthogonal to each other, such as a value that is a sum of two other parametric modulators?

Thanks,
Jessica

Luke J. Chang

unread,
Mar 5, 2013, 10:51:19 PM3/5/13
to Jessica Mollick, wagerl...@googlegroups.com
Hi Jessica, 

See my comments below.

On Mar 4, 2013, at 7:57 PM, Jessica Mollick <jmol...@gmail.com> wrote:

Hi all,
I had a question about the effects of entering parametric modulators in SPM in a multi-session design matrix. If you have the same parametric modulator for multiple runs, will the modulator values be normalized at all relative to their values on the other runs? If so, can you avoid this effect by concatenating onsets and parametric modulators across runs?


I have no idea what SPM does here, but if you concatenate runs, there will be no automatic normalization by run. I hope SPM doesn't normalize by default in a multi-session design, because I'm not sure it's a good idea: the intercept will absorb variance associated with the run mean, and the betas will no longer be comparable. If you enter a separate parametric modulator for each run, you allow the slopes to vary by run; otherwise you get only one slope across all runs. If you do use a separate regressor for each run, you will have to average the betas before you can take them up to the second-level group analysis.

I defer to the spm users on this question.

My follow-up question is a little more abstract, so it might be more difficult to answer; feel free to point me towards any useful resources. It is my understanding that SPM orthogonalizes all of your parametric modulators for the same condition relative to the first one you enter, though you can also do special tricks to make it not orthogonalize. If you want to compare the fit of different parametric modulators to each other, is it best to run a bunch of different models, each with only one of these parametric modulators, and compare them with some sort of model selection procedure, or to compare modulators within the same model? Does the strategy change if the modulators are not orthogonal to each other, such as a value that is a sum of two other parametric modulators?


I believe it's correct that SPM orthogonalizes your regressors so that each preceding regressor is privileged with the shared variance. This makes sense for certain types of analyses, particularly if you only have one parametric modulator and want the shared variance to go to the stimulus regressor. However, I think this is a bad default if you have multiple parametric regressors. Instead, it is probably better to turn it off by default, as I think it is more interesting to model the variance that is independent with respect to all other regressors. This is one of the benefits of multiple regression, provided multicollinearity among the regressors isn't too high. There is info on how to do this on the SPM listserv, and I believe Marieke has done this in her build on dream if you just want to set your path to her directory.

I think there are a number of ways to compare regressors depending on your question. 

1) you could do a sort of stepwise procedure to see if the new regressor explains a significant amount of variance above and beyond the nested model using a model comparison approach.
2) you could directly contrast the regressors provided they were initially on the same scale (e.g., normalized)
3) you could probably do some sort of model selection to find the model with the best goodness of fit (e.g., smallest AIC or BIC) at each voxel.

You are going to have a problem accurately estimating your betas if you have highly correlated regressors, such as one that is a linear combination of two others. In this situation, it would be best to a) use the orthogonalization, or b) not include the other two regressors if the combined one is what you are primarily interested in.
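To make that last point concrete, here is a small NumPy sketch (with made-up data; this is illustrative only, not SPM code) showing why a modulator that is an exact sum of two other modulators makes the betas non-estimable:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(100)   # hypothetical modulator 1
b = rng.standard_normal(100)   # hypothetical modulator 2
c = a + b                      # exact linear combination of the other two

# Design matrix with an intercept plus all three modulators:
X = np.column_stack([np.ones(100), a, b, c])

# The matrix is rank-deficient (4 columns, rank 3), so the individual
# betas for a, b, and c are not uniquely estimable.
print(np.linalg.matrix_rank(X))   # 3, not 4
```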

-luke


Thanks,
Jessica


Tor Wager

unread,
Mar 6, 2013, 12:02:44 PM3/6/13
to Luke J. Chang, Jessica Mollick, wagerl...@googlegroups.com
Additional replies below!

On Mar 5, 2013, at 8:51 PM, Luke J. Chang wrote:

Hi Jessica, 

See my comments below.
On Mar 4, 2013, at 7:57 PM, Jessica Mollick <jmol...@gmail.com> wrote:

Hi all,
I had a question about the effects of entering parametric modulators in SPM in a multi-session design matrix. If you have the same parametric modulator for multiple runs, will the modulator values be normalized at all relative to their values on the other runs? If so, can you avoid this effect by concatenating onsets and parametric modulators across runs?


TW: Modulators will be CENTERED WITHIN RUN.  That means that you will lose the information about variation in the modulator values across runs. Depending on your application, this could be a big deal. You can avoid this by concatenating across runs, yes.  If you do so, you have to make sure to model the run intercepts using user-specified regressors!
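For concreteness, a minimal NumPy sketch of the difference (illustrative only, not SPM code; the values and run structure are made up):

```python
import numpy as np

# Hypothetical modulator values for two runs; run 2 has a higher mean.
run1 = np.array([1.0, 2.0, 3.0])
run2 = np.array([4.0, 5.0, 6.0])

# Centering within run (what SPM's per-session setup effectively does):
# the between-run difference in mean modulator value is lost.
centered = np.concatenate([run1 - run1.mean(), run2 - run2.mean()])

# Concatenated alternative: demean across ALL trials, and add one
# intercept column per run so the run means are still modeled.
all_vals = np.concatenate([run1, run2])
pmod = all_vals - all_vals.mean()                     # keeps between-run variation
run_intercepts = np.kron(np.eye(2), np.ones((3, 1)))  # one run-mean column per run

print(centered)   # [-1.  0.  1. -1.  0.  1.]  (run means gone)
print(pmod)       # [-2.5 -1.5 -0.5  0.5  1.5  2.5]  (run means preserved)
```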


My follow-up question is a little more abstract, so it might be more difficult to answer; feel free to point me towards any useful resources. It is my understanding that SPM orthogonalizes all of your parametric modulators for the same condition relative to the first one you enter, though you can also do special tricks to make it not orthogonalize. If you want to compare the fit of different parametric modulators to each other, is it best to run a bunch of different models, each with only one of these parametric modulators, and compare them with some sort of model selection procedure, or to compare modulators within the same model? Does the strategy change if the modulators are not orthogonal to each other, such as a value that is a sum of two other parametric modulators?


I believe it's correct that SPM orthogonalizes your regressors so that each preceding regressor is privileged with the shared variance.

TW: Yes.

This makes sense for certain types of analyses, particularly if you only have one parametric modulator and want the shared variance to go to the stimulus regressor. However, I think this is a bad default if you have multiple parametric regressors. Instead, it is probably better to turn it off by default, as I think it is more interesting to model the variance that is independent with respect to all other regressors.

TW: I agree.

This is one of the benefits of multiple regression, provided multicollinearity among the regressors isn't too high. There is info on how to do this on the SPM listserv, and I believe Marieke has done this in her build on dream if you just want to set your path to her directory.

TW: I believe Wani posted something on our WIKI about how to do this.

If you enter multiple non-orthogonalized PMs, you are testing whether there is unique variance in brain activity explained by each, controlling for the others. If you want to see whether one type of PM fits better than another, you can run the two alternative models and compare the error terms. It would be helpful to have a formal model comparison procedure implemented -- SPM may now have such tools (command line?), but I'm not sure. A likelihood ratio test would be one way of doing it. A simple approach is to run a paired t-test on the residual variance across subjects.
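As an illustration of that simple approach, a NumPy sketch (the per-subject residual variances below are made-up numbers, and `residual_variance` is a generic OLS helper, not an SPM function):

```python
import numpy as np

def residual_variance(y, X):
    """Unexplained variance after an OLS fit of y on design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid.var(ddof=X.shape[1])

# Made-up residual variances for 5 subjects under two competing models.
# In practice, residual_variance would be computed per subject per model
# at a voxel or ROI.
rv_model1 = np.array([1.10, 0.95, 1.20, 1.05, 0.98])
rv_model2 = np.array([0.90, 0.85, 1.00, 0.92, 0.88])

# Paired t-statistic on the difference, computed by hand:
d = rv_model1 - rv_model2
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(t > 0)   # True here: model 2 leaves less unexplained variance
```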

Tor

Wani (ChoongWan) Woo

unread,
Mar 6, 2013, 1:44:13 PM3/6/13
to Tor Wager, Luke J. Chang, Jessica Mollick, wagerl...@googlegroups.com
Here is the relevant info from the wiki about turning off orthogonalization (see below). To turn off orthogonalization, I commented out the lines calling spm_orth in spm_fMRI_design.m and spm_get_ons.m (the second option below). You then need to demean your PMs on your own.
  • Another option is to create your own PM regs and enter them as user-specified regressors. (You can also enter them in batch using the "multiple regressors" input option, which reads saved values from a .mat file.)
  • Another option is to make SPM not orthogonalize. This involves commenting out some lines of code in 2 SPM functions. In spm_fMRI_design.m and in spm_get_ons.m, there are calls to spm_orth. Comment out those lines. Never do this on a shared SPM copy. Keep in mind that when turning off spm_orth the PMs will not get demeaned anymore either!
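For anyone curious what the serial orthogonalization actually does, here is a rough NumPy sketch of the idea (a generic sketch in the spirit of spm_orth, not SPM's implementation): each column keeps only the variance left unexplained by the columns before it. I believe this is also why turning it off removes the demeaning: when the first column is effectively constant across trials, orthogonalizing a modulator against it subtracts the mean.

```python
import numpy as np

def serial_orth(X):
    """Orthogonalize each column of X with respect to all earlier
    columns (in the spirit of spm_orth; generic sketch, not SPM code)."""
    X = X.astype(float).copy()
    for j in range(1, X.shape[1]):
        prev = X[:, :j]
        beta, *_ = np.linalg.lstsq(prev, X[:, j], rcond=None)
        X[:, j] -= prev @ beta   # keep only the unexplained part
    return X

# A constant first column plus a parametric modulator: after serial
# orthogonalization, the modulator column comes out demeaned.
pm = np.array([2., 4., 6., 8.])
X = serial_orth(np.column_stack([np.ones(4), pm]))
print(X[:, 1])   # [-3. -1.  1.  3.]
```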


Jessica Mollick

unread,
Mar 6, 2013, 5:40:27 PM3/6/13
to Wani (ChoongWan) Woo, Tor Wager, Luke J. Chang, wagerl...@googlegroups.com
Thanks everyone for these helpful responses! I'll let you know if I have any follow-up questions.

-Jessica

Jessica Mollick

unread,
May 1, 2013, 2:20:03 PM5/1/13
to Wani (ChoongWan) Woo, Tor Wager, Luke J. Chang, wagerl...@googlegroups.com
As a follow-up question to this, which lines in spm_fMRI_design and spm_get_ons do people have commented out to do this? I commented out lines 277-279 of spm_fMRI_design and line 228 of spm_get_ons, added the modified scripts to my path, then added in demeaned parametric modulators and ran an analysis. However, the results of that analysis look exactly the same as what I get when I run another analysis with SPM defaults and non-demeaned pmods. Is there a way to check whether your pmods are orthogonalized after running an analysis?

Tor Wager

unread,
May 1, 2013, 3:21:55 PM5/1/13
to Jessica Mollick, wagerlabtools
Hi Jessica,

On the last point, look at the VIFs for the pmods alone. If they are orthogonal, the VIFs will be exactly 1.
getvif.m will do this.
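getvif.m is one of the lab tools; for anyone without it, the computation is simple enough to sketch generically (NumPy, illustrative; this is the standard VIF definition, not the lab code):

```python
import numpy as np

def vifs(X):
    """Variance inflation factor for each column of X: regress that
    column on the others (plus an intercept); VIF = 1 / (1 - R^2)."""
    n = X.shape[0]
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# Two exactly orthogonal columns -> VIFs of exactly 1.
X = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])
print(vifs(X))   # [1. 1.]
```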

Tor

Luke Chang

unread,
Sep 24, 2013, 5:55:34 PM9/24/13
to wagerl...@googlegroups.com, Tor Wager, Luke J. Chang, Jessica Mollick
I'm not an SPM user, but I wanted to follow up on what Wani mentioned about making sure that you manually demean if you turn spm_orth() off.

Does this mean:

1) demean the values before they go in the design matrix?

2) demean the values after they go in the design matrix (this will add a bunch of zeros and will change the mean)?

3) demean after they go in the design matrix and have been convolved?

I believe you will get three different regressors with these methods. Also, this is more conceptual than SPM-specific: what actually makes the most sense?
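To see that the three orderings really do differ, a toy NumPy sketch (the kernel and modulator values are made up; this is not SPM's HRF or convolution code):

```python
import numpy as np

hrf = np.array([0., 0.5, 1.0, 0.6, 0.2])   # toy "HRF" kernel
vals = np.array([2., 4., 6.])               # raw modulator values
onsets = [0, 10, 20]                        # TR indices of the 3 events
n_tr = 30

def stick(v):
    """Place modulator values at onset TRs, zeros elsewhere."""
    s = np.zeros(n_tr)
    s[onsets] = v
    return s

# 1) demean the raw values, then build sticks and convolve
r1 = np.convolve(stick(vals - vals.mean()), hrf)[:n_tr]

# 2) build the stick column first, then demean the whole column
#    (the zeros at non-onset TRs now pull down the mean), then convolve
col = stick(vals)
r2 = np.convolve(col - col.mean(), hrf)[:n_tr]

# 3) convolve first, then demean the convolved regressor
r3 = np.convolve(stick(vals), hrf)[:n_tr]
r3 = r3 - r3.mean()

# The three orderings give genuinely different regressors:
print(np.allclose(r1, r2), np.allclose(r1, r3))   # False False
```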

-luke

Bob Spunt

unread,
Sep 25, 2013, 12:38:49 AM9/25/13
to Luke Chang, wagerl...@googlegroups.com, Tor Wager, Luke J. Chang, Jessica Mollick
Hi Luke - Hope all is well! I've dealt with this a fair amount recently, so I figured I'd share what I learned. Regarding spm_orth, I just want to make sure folks know that there are actually two points in the default SPM8 model-building pipeline (see spm_fMRI_design.m) where spm_orth is used to serially orthogonalize regressors: once before convolution (which I believe is applied to the raw parameter values and occurs at around line 228 of spm_get_ons.m), and once after (I've pasted the relevant section of spm_fMRI_design.m below). I still don't know why these two seemingly redundant orthogonalization steps are built into the pipeline. I asked this question on the SPM list and didn't get any clear responses, other than a couple of folks basically agreeing that the second call to spm_orth is redundant. In any event, I just wanted to point this out, since it means there are two calls to spm_orth that need to be commented out to avoid the default serial orthogonalization.

Regarding your second question, demean the raw parameter values.

I hope this helps! If you happen to have any thoughts on why there'd be utility to orthogonalization both before and after convolution, I'd be grateful to hear them.

Cheers,
Bob

Relevant code from spm_fMRI_design.m below:

% Get inputs, neuronal causes or stimulus functions U
%------------------------------------------------------------------
U = spm_get_ons(SPM,s);

% Convolve stimulus functions with basis functions
%------------------------------------------------------------------
[X,Xn,Fc] = spm_Volterra(U,bf,V);

% Resample regressors at acquisition times (32 bin offset)
%------------------------------------------------------------------
try
    X = X((0:(k - 1))*fMRI_T + fMRI_T0 + 32,:);
end

% and orthogonalise (within trial type)
%------------------------------------------------------------------
for i = 1:length(Fc)
    X(:,Fc(i).i) = spm_orth(X(:,Fc(i).i));
end

Luke Chang

unread,
Sep 25, 2013, 2:13:56 AM9/25/13
to Bob Spunt, wagerl...@googlegroups.com, Tor Wager, Luke J. Chang, Jessica Mollick
Thanks for the tip Bob!

I usually do the same thing and demean prior to entering the values in the design matrix. I recently learned through a collaboration that BrainVoyager automatically centers all of its regressors after they have been convolved, which doesn't seem like a great idea.

Orthogonalizing by default also seems like a bad idea. The order matters, and it's pretty easy to induce artifacts in your data. I've yet to hear a compelling example of when this would be necessary.

Hope things are going well at Caltech!

-luke