On May 20, 2020, at 7:36 PM, Darren Yeo <jians...@gmail.com> wrote:
Hi Dr. Kay and colleagues,

I am planning to use GLMdenoise to derive the optimal set of regressors, which I will then use to better model my fMRI data within BrainVoyager.

In our experimental task, a run has 54 trials:
- 9 exemplars x 3 trials each (27 trials in total), in which participants respond that a letter is present among a string of digits (e.g., 8A961)
- 27 trials in total, in which participants respond that a letter is absent among a string of digits (e.g., 64713)

We have 2-4 runs per subject, most with 4 runs. In our lab, we traditionally distinguish correct trials from incorrect trials (i.e., commission errors and omission errors) to better model our data.

For the 'design' input, I currently have the following for each run:
- 9 regressors for correct 'present' trials (1 for each exemplar; 3 trials each, minus any incorrect trials)
- 1 regressor for correct 'absent' trials (27 trials, minus any incorrect trials)
- 1 regressor for commission error trials
- 1 regressor for omission error trials

Here are the design matrices for 2 subjects (S1 with only 3 runs, and S2 with the full set of 4 runs); the conditions are in the order of the 12 regressors mentioned above:
Due to this coding scheme, some runs have no remaining correct trials for one or more exemplar conditions (so those condition regressors are vectors of zeros), and some runs have no error trials (so the error regressors are vectors of zeros). This seems to pose a problem for the cross-validation if I were to use these design matrices.

An alternative coding scheme I am considering is a design matrix with the following, to ensure that every run has the same set of conditions:
- 9 regressors for correct 'present' trials (1 for each exemplar; all 3 trials each, regardless of response accuracy)
- 1 regressor for correct 'absent' trials (27 trials, regardless of response accuracy)

Then include "extraregressors" to code for the error trials:
- 1 regressor for commission error trials
- 1 regressor for omission error trials

For this alternative approach, I am worried about perfect collinearity between an exemplar condition regressor and one of the error regressors in any one run in which all of the 'present' trials for that exemplar were incorrect.

I was wondering if you could advise on whether this alternative coding scheme is more sound than the original one, and whether there are things I should be concerned about. I am also wondering whether I should omit extraregressors that are just vectors of zeros from each run, or leave them in, given that the documentation states: "The number of extra regressors does not have to be the same across runs, and each run can have zero or more extra regressors. If [] or not supplied, we do not use extra regressors in the model."

I would also greatly appreciate an example of how extraregressors can be incorporated in the code (I don't suppose it matters, but I am using onset timings for the <design> input, as the onsets do not exactly coincide with the TRs).

Thank you!

Best,
Darren
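For reference, the perfect-collinearity worry above can be checked per run by comparing the rank of the stacked regressor matrix against its column count. A minimal sketch (in Python/NumPy for illustration, with made-up variable names; GLMdenoise itself is MATLAB):

```python
import numpy as np

def find_collinearity(design, extra, tol=1e-10):
    """Return True if stacking the condition regressors (design) with the
    error regressors (extra) yields a rank-deficient matrix, i.e. some
    column is an exact linear combination of the others."""
    combined = np.column_stack([design, extra])
    # drop all-zero columns first; they are rank-deficient by construction
    keep = np.linalg.norm(combined, axis=0) > tol
    combined = combined[:, keep]
    return np.linalg.matrix_rank(combined, tol=tol) < combined.shape[1]

# toy run: the error regressor duplicates exemplar column 0
design = np.zeros((10, 2))
design[[1, 4], 0] = 1   # exemplar 1 onsets
design[[2, 7], 1] = 1   # exemplar 2 onsets
extra = design[:, [0]]  # all of exemplar 1's trials were errors
print(find_collinearity(design, extra))  # True
```

Running this check on each run before fitting would flag exactly the problematic runs in which every 'present' trial for some exemplar was an error.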
[Attachments: S1_designmatrices.png, S2_designmatrices.png]
Hi Darren,

Thanks for the clear explanation of your paradigm. You touch on a number of issues which are collectively a bit tricky to think about, and there are many choices one can make...
Regarding the correct/incorrect trials -- I am wondering what the intention is for these trials. For example, if the intention is that you just want to effectively ignore the responses from incorrect trials, then that leads you down one path. Or, if the intention is that you are specifically interested in estimating the response to incorrect trials, that leads to other choices... At least the way GLMdenoise is set up, it acts as if you don't care about the beta weights associated with any extraregressors you give it. This was a design choice made a long time ago (and maybe it should be different).
Regarding the columns of all zeros -- I am not sure they pose a huge problem. I think it might all be handled gracefully (for example, a column of all zeros is by default given a beta estimate of zero). Does the code run (without crashing)? If so, things are likely fine.
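As a sanity check on the all-zeros case, here is a sketch of plain least squares via the pseudoinverse (Python/NumPy for illustration, not GLMdenoise's actual code path): a zero column receives a (near-)zero beta and leaves the other estimates untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(20)

# design with one real regressor and an all-zeros "empty condition" column
x = rng.standard_normal(20)
design = np.column_stack([x, np.zeros(20)])

# least-squares solution via the pseudoinverse
betas = np.linalg.pinv(design) @ y

print(betas[1])  # ~0 -- the zero column gets a (near-)zero beta
# the beta for the real regressor matches a one-column fit
print(np.isclose(betas[0], (np.linalg.pinv(x[:, None]) @ y)[0]))
```

So an empty condition regressor is inert in the fit itself; the separate question is what it does to cross-validated model comparison.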
Regarding the collinearity issue -- my understanding is that, in one of the scenarios you lay out, an extraregressor might be identical to one of your other design matrix columns. I agree this could be a problem. The code works by fully estimating (in the least-squares sense) the beta weights for the extra regressors; it projects these extra regressors out of both the data and the other design matrix columns. So, I think what it will do in this case is assign all of the variance to the extra regressor and leave an essentially random design matrix column behind. But I'm not entirely sure what will ensue after that (it might be okay, or it might be catastrophic). We may need to do a dbstop to trace through the code...
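For intuition, the projection step can be sketched like this (a Python/NumPy illustration of the general residual-forming-matrix idea, not GLMdenoise's actual implementation; all variable names are made up). Note how an extra regressor identical to a design column annihilates that column:

```python
import numpy as np

def project_out(regressors, *matrices):
    """Remove the column space of `regressors` from each given matrix
    (data and design) via the residual-forming matrix I - R pinv(R)."""
    r = np.atleast_2d(regressors)
    m = np.eye(r.shape[0]) - r @ np.linalg.pinv(r)
    return [m @ x for x in matrices]

rng = np.random.default_rng(1)
design = rng.standard_normal((30, 3))
data = rng.standard_normal(30)

# extra regressor identical to design column 0
extra = design[:, [0]]

data_p, design_p = project_out(extra, data, design)

# column 0 is annihilated (reduced to numerical noise)...
print(np.linalg.norm(design_p[:, 0]))  # ~0
# ...while the other columns keep most of their variance
print(np.linalg.norm(design_p[:, 1]) > 1.0)
```

After this step, the duplicated condition column carries essentially no variance, which is why all of that variance ends up attributed to the extra regressor.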
Regarding the extraregressors, it should be as simple as passing an options struct like opt = struct('extraregressors',{{A B C}}) (note the double braces, which MATLAB requires so that the cell array {A B C} is stored as a single field value rather than expanding into a 1x3 struct array), where A is a 2D matrix of time points x regressors, B is a 2D matrix of time points x regressors, and so on. The number of time points in A should correspond to what you have for run 1, the number of time points in B to run 2, and so on. The regressors in A, B, and C are treated distinctly (i.e., the code estimates separate weights for the columns of A, the columns of B, etc.).
On a completely different note, perhaps a way to simplify a lot of these tricky issues is to simply estimate a separate beta weight (amplitude) for every trial in your experiment (and then deal with all the correct/incorrect bookkeeping post hoc)? There is a new function, GLMestimatesingletrial.m, that we are developing. It actually does GLMdenoise and several other pieces of major analysis magic, and it might be suitable for your needs. One thing, though, is that it requires the design matrix to be specified in lockstep with your TRs. If this is not currently the case, you could either round onsets to the nearest TR or upsample the fMRI time-series data to better match your experiment. I can give you more details or discuss further if you are curious.
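For the rounding option, building a TR-locked design matrix from onset times could look roughly like this (a Python/NumPy sketch with made-up variable names; the function and its interface are illustrative, not part of GLMestimatesingletrial.m):

```python
import numpy as np

def onsets_to_design(onsets_sec, condition_idx, n_conditions, n_vols, tr):
    """Build a TR-locked design matrix (volumes x conditions) by rounding
    each onset time (in seconds) to the nearest TR/volume index."""
    design = np.zeros((n_vols, n_conditions))
    for t, c in zip(onsets_sec, condition_idx):
        vol = int(round(t / tr))     # nearest volume index
        if 0 <= vol < n_vols:
            design[vol, c] = 1
    return design

# toy run: TR = 2 s, 10 volumes, onsets not aligned to TRs
design = onsets_to_design(
    onsets_sec=[1.2, 4.9, 13.6],   # round to volumes 1, 2, 7
    condition_idx=[0, 1, 0],
    n_conditions=2, n_vols=10, tr=2.0)
print(design[1, 0], design[2, 1], design[7, 0])  # 1.0 1.0 1.0
```

The maximal rounding error is half a TR, which is often acceptable; upsampling the time series instead trades computation for better temporal alignment.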
> On a completely different note, perhaps a way to simplify a lot of these tricky issues is to simply estimate a separate beta weight (amplitude) for every trial you have in your experiment (and then deal with all the correct/incorrect stuff posthoc)? There is a new code function, GLMestimatesingletrial.m, that we are developing. [...] I can give you more details or discuss further, if you are curious

This is potentially useful, especially for classification analyses. I will need to think more about whether it is suitable for my purpose, which is to perform RSA. The final GLM that I implement in BrainVoyager (for consistency with the other analyses that led to our a priori ROIs) will be a single model spanning all 4 runs of the task, so that I have more repetitions of each exemplar (up to 12 trials) with which to reliably estimate their betas. I'll definitely consider it in the future if classification analyses are on the table for this or other datasets. Thank you for pointing me to this new function!