Experiment design questions


xe...@ucdavis.edu

Mar 13, 2019, 4:02:08 PM
to GLMdenoise
Hi,

I have a question about the experiment design.

My experiment is a mix of block and event-related design. Say there are 8 conditions in total. The first condition has 20 successive trials (2 s each); in SPM, I used to specify the onset of the first trial and set the duration to 40 s. The remaining 7 conditions are event-related, with a 0.5-s stimulus duration.

I saw that the FAQ has an answer for conditions with different durations, but it seems to only work when onsets are fixed to volume onsets. My stimulus onsets do not coincide with the TRs of the data, so I have to use specific onset times. In this situation, how should I set up the experiment design for the first condition in GLMdenoise?

Thank you so much for your help!
Xinger

Kendrick Kay

Mar 14, 2019, 9:48:23 AM
to xe...@ucdavis.edu, GLMdenoise
Hi Xinger,

I think you have two possible approaches.  One approach is to resample your fMRI data to coincide with your experiment timing.  Lately, I have been doing this for all of our data.  I don't think it can hurt (if you are slice-time correcting your data, you are resampling/interpolating anyway, so there seems to be no reason not to resample the data at the exact time points you want), and there are some potential benefits.  For example, if your acquisition TR doesn't quite match your experiment timing, you are getting a little extra temporal information, since the fMRI data are in effect "jittered" with respect to the experimental timing.  For a concrete example: one of our experiments has a 4-s trial structure, but our acquisition TR is 1.6 s.  4 is not evenly divisible by 1.6, so in pre-processing we achieve, in a single step, both a correction for slice-time differences and an upsampling of the data to an effective TR of 1 s.  It is just a temporal interpolation through the time series of each slice, such that you get an estimate of the signal at a sampling rate of 1 s.
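To make that interpolation step concrete, here is a minimal Python/NumPy sketch (not GLMdenoise code) of upsampling a single voxel's time series from a 1.6-s TR to an effective 1-s TR. The synthetic signal, run length, and the choice of shape-preserving (PCHIP) interpolation are illustrative assumptions; a real pipeline would interpolate each slice at its own acquisition offsets so that slice-time correction and upsampling happen in one step:

```python
import numpy as np
from scipy.interpolate import pchip_interpolate

# Hypothetical single-voxel time series acquired at TR = 1.6 s.
tr_old, tr_new = 1.6, 1.0
n_vols = 150
t_old = np.arange(n_vols) * tr_old          # original sample times (s)
signal = np.sin(2 * np.pi * t_old / 40.0)   # stand-in for BOLD data

# Resample to an effective TR of 1 s. In a real pipeline, the new grid
# would be shifted by each slice's acquisition time, achieving slice-time
# correction and upsampling in a single interpolation.
t_new = np.arange(0, t_old[-1], tr_new)
signal_new = pchip_interpolate(t_old, signal, t_new)

print(signal_new.shape)  # one sample per second over the run
```

PCHIP is used here because it passes through the original samples without the overshoot that plain cubic splines can introduce; any reasonable interpolant would illustrate the same idea.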

Anyway, if you go with that approach, I think your life will be easier.  (From GLMdenoise's point of view, it can do more flexible fitting (an FIR-type model) if you can specify your stimulus design in terms of discrete multiples of the volume TR.)  You could then code your long condition as a "train" of stimulus onsets, say, every 0.5 s.  Your condition is 40 s long, so if you get everything into 0.5-s TRs, you could just have a train of 80 stimulus onsets.  And for your short events, you could code each one as a single stimulus onset (since the events are 0.5 s long).  (If resampling the data to 0.5 s makes the data too large, you could probably get away with going to a 1-s TR and acting as if your 0.5-s short events are the result of a 1-s-long neural event --- the reason being that the difference between the response to a 0.5-s stimulus and a 1-s stimulus, relative to the very slow HRF timecourse shape, is likely negligible.)
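As an illustration of that coding scheme, here is a hypothetical time-by-conditions design matrix in Python/NumPy. The run length, onset times, and the 0/1 onset convention are assumptions for the sketch, not the exact GLMdenoise input format:

```python
import numpy as np

# Sketch of a design matrix (time x conditions), assuming the data have
# been resampled to a 0.5-s TR. Condition 0 is the 40-s block, coded as a
# train of 80 onsets; conditions 1-7 are 0.5-s events, one onset each.
tr = 0.5
n_vols = 600                      # hypothetical 300-s run at 0.5-s TR
n_conditions = 8
design = np.zeros((n_vols, n_conditions))

block_onset_s = 10.0              # hypothetical block start time (s)
block_dur_s = 40.0
start = int(round(block_onset_s / tr))
design[start:start + int(round(block_dur_s / tr)), 0] = 1   # 80 ones

# Hypothetical onset times for the 7 short event conditions.
event_onsets_s = [60.0, 95.5, 131.0, 166.5, 202.0, 237.5, 273.0]
for cond, onset in enumerate(event_onsets_s, start=1):
    design[int(round(onset / tr)), cond] = 1  # one onset per short event

print(design.sum(axis=0))  # 80 onsets for the block, 1 for each event
```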


The other approach is to specify all event onsets with decimal-second precision.  This approach would code the long stimulus (40 s) as a series of short events; for example, it could be coded as 80 events, under the interpretation that each specified event represents the response to a 0.5-s stimulus.  (Internally, the code then does the requisite convolution with a 0.5-s HRF and summing over time, so that in the end you get a "block-looking" timecourse.)
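A rough sketch of that superposition logic in Python/NumPy (the toy HRF shape, run length, and internal 0.1-s resolution are illustrative assumptions, not GLMdenoise's actual internals): 80 events at 0.5-s spacing each contribute one copy of the HRF, and summing the copies yields a block-like plateau rather than a brief peak.

```python
import numpy as np

# Fine internal time grid on which onsets can land at decimal-second
# precision.
dt = 0.1                                    # internal resolution (s)
t = np.arange(0, 30, dt)
hrf = (t / 5.0) ** 5 * np.exp(-t)           # toy HRF for a 0.5-s stimulus
hrf /= hrf.max()                            # peak normalized to 1

run_dur = 120.0                             # hypothetical run length (s)
timecourse = np.zeros(int(run_dur / dt))

# The 40-s block, coded as 80 events spaced 0.5 s apart.
block_onset = 10.0
onsets = block_onset + 0.5 * np.arange(80)
for onset in onsets:
    i = int(round(onset / dt))
    n = min(len(hrf), len(timecourse) - i)
    timecourse[i:i + n] += hrf[:n]          # superpose one HRF per event

# The summed response plateaus during the block instead of peaking briefly.
```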


Does this all make sense?

Kendrick

--
Kendrick Kay, PhD
Assistant Professor
Center for Magnetic Resonance Research (2-116)
University of Minnesota, Twin Cities
   Web: http://cvnlab.net
E-mail: k...@umn.edu
  Cell: 510-206-1059
 Skype: kendrickkay


xe...@ucdavis.edu

Mar 15, 2019, 3:40:09 PM
to GLMdenoise
Hi Kendrick,

This makes sense! I'll try to resample my fMRI data. Thank you so much for your quick reply!

Best,
Xinger 

