Ambiguous mismatch in voxel dimensions between data and mask

lyam.m...@gmail.com

Apr 15, 2022, 12:17:18 PM
to CoSMoMVPA
Dear users, 

I am trying to load an fMRI dataset in native subject space, and mask the dataset with an ROI that is also in native (functional) space, using cosmo_fmri_dataset(ds_fname, 'mask', mask_fname). ds = the functional dataset, mask = the ROI.

This returns an error: "voxel dimension mismatch between data and mask:max difference is 2.02620 > 0.00010"

I dug into ds.a.vol.mat (loaded without a mask) and mask.a.vol.mat. Sure enough, the matrices are _slightly_ different (see attached images). My understanding is that values on the diagonal of .a.vol.mat are the XYZ dimensions (and these values match between ds and mask), so what are the first three values in the 4th column? 

While debugging I tried copying those mis-matched values from ds.a.vol.mat into mask.a.vol.mat, and this allowed me to apply the mask as intended. So, it seems that the mis-matched values in .a.vol.mat are indeed the problem. Is it "safe" to hack this by copying over the values, or do I need to fix an underlying problem?

Thanks!


ds_dimensions.png
mask_dimensions.png

Nick Oosterhof

Apr 16, 2022, 10:01:43 AM
to lyam.m...@gmail.com, CoSMoMVPA
Greetings,

> On 15 Apr 2022, at 18:17, lyam.m...@gmail.com <lyam.m...@gmail.com> wrote:
>
> I am trying to load an fMRI dataset in native subject space, and mask the dataset with an ROI that is also in native (functional) space, using cosmo_fmri_dataset(ds_fname, 'mask', mask_fname). ds = the functional dataset, mask = the ROI.
>
> This returns an error: "voxel dimension mismatch between data and mask:max difference is 2.02620 > 0.00010"
>
> I dug into ds.a.vol.mat (loaded without a mask) and mask.a.vol.mat. Sure enough, the matrices are _slightly_ different (see attached images). My understanding is that values on the diagonal of .a.vol.mat are the XYZ dimensions (and these values match between ds and mask), so what are the first three values in the 4th column?

The .a.vol.mat field contains the voxel-to-world mapping, stored as an affine transformation matrix. For details see https://en.wikipedia.org/wiki/Transformation_matrix#Affine_transformations, except that voxel data uses 3 dimensions instead of the 2 used in the Wikipedia article. The first three values in the 4th column are the position offset. If you think of all voxels as being contained in a box, then these three values are the coordinates of one of the eight corners; or, more precisely, one voxel away from one of the corners due to base-1 indexing.
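
As an illustration, here is a minimal sketch (assuming ds was loaded with cosmo_fmri_dataset, and that base-1 voxel indices map through the matrix directly) of how the matrix maps voxel indices to world coordinates in mm:

    % .a.vol.mat is a 4x4 affine matrix: the upper-left 3x3 block holds
    % the voxel sizes (plus any rotation/shear), and the first three
    % values of the 4th column are the position offset.
    ijk = [1; 1; 1];                  % base-1 indices of the first voxel
    xyz1 = ds.a.vol.mat * [ijk; 1];   % multiply in homogeneous coordinates
    xyz = xyz1(1:3)                   % world coordinates (mm) of that voxel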

>
> While debugging I tried copying those mis-matched values from ds.a.vol.mat into mask.a.vol.mat, and this allowed me to apply the mask as intended. So, it seems that the mis-matched values in .a.vol.mat are indeed the problem. Is it "safe" to hack this by copying over the values, or do I need to fix an underlying problem?

Copying over the mis-matched values actually translates (moves) all voxels virtually; in your case by about 0.5 mm in the first dimension and 2 mm in the second dimension. This does not seem ideal, and it suggests that something is not working correctly in your processing pipeline. I would recommend trying to figure out how / where the difference in voxel position arises.
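
To quantify the discrepancy yourself, a quick check could look like this (a sketch assuming both ds and mask were loaded with cosmo_fmri_dataset, the latter without applying it as a mask, as in your screenshots):

    % element-wise difference between the two affine matrices; the error
    % is raised when the maximum exceeds the tolerance (0.0001)
    delta = abs(ds.a.vol.mat - mask.a.vol.mat);
    max_difference = max(delta(:))
    % for a pure translation mismatch, only the 4th column is nonzero
    delta(1:3, 4)'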

Somewhat peculiar is that the position in the third dimension matches between functional volume and mask; if that were not the case, I would have suspected that you might have taken the mask from one participant's brain and the functional data from another.

Best,
Nick.


lyam.m...@gmail.com

Apr 18, 2022, 10:07:30 AM
to CoSMoMVPA
Hi Nick,

Thanks for the input! You've confirmed my suspicions about what is causing the problem. The ROI mask was generated in native structural space and then transformed to native functional space, using example_func from one of two functional runs as the reference image. Loading data from the run that was used as the reference works fine - I only get the error when trying to load data from the other run. The runs were quite long (about 12 mins each), so I suspect the participant moved a tiny bit between runs. I THINK this explains the mis-match in head position. So, looks like I'll have to create separate ROIs for each run.

Nick Oosterhof

Apr 18, 2022, 1:39:22 PM
to lyam.m...@gmail.com, CoSMoMVPA
Hi Lyam,

On Mon, 18 Apr 2022 at 16:07, lyam.m...@gmail.com <lyam.m...@gmail.com> wrote:
Thanks for the input! You've confirmed my suspicions about what is causing the problem. The ROI mask was generated in native structural space and then transformed to native functional space, using example_func from one of two functional runs as the reference image. Loading data from the run that was used as the reference works fine - I only get the error when trying to load data from the other run. The runs were quite long (about 12 mins each), so I suspect the participant moved a tiny bit between runs. I THINK this explains the mis-match in head position. So, looks like I'll have to create separate ROIs for each run.

I don't see how that could work. I think you would want to run the motion estimation & correction using the same reference functional volume across all runs, and define your ROI just once for all runs from a participant. For example, the reference volume could be the 6th volume in the first run. 

If you define separate ROIs for each run, then voxels will be at slightly different locations (different voxel-to-world matrices in .a.vol.mat), so you cannot do MVPA on them (or univariate analyses, for that matter). In other words, the locations of the voxels in a pattern would differ across runs. In CoSMoMVPA, you could use cosmo_stack to combine data from different runs, which will complain if you try to stack volumes in different spaces.
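
For example (a minimal sketch; the per-run file names are hypothetical):

    % load two runs; cosmo_stack requires the dataset attributes
    % (including .a.vol.mat) to match, so it raises an error if the
    % runs are in different spaces
    ds1 = cosmo_fmri_dataset('run1.nii');
    ds2 = cosmo_fmri_dataset('run2.nii');
    ds_all = cosmo_stack({ds1, ds2});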

I don't know what analysis pipeline you use, but AFNI has a quite elegant approach in align_epi_anat.py: it can estimate three affine transformations: (1) head movement, one transformation per volume, relative to one functional reference volume; (2) the general transformation from the functional reference volume to the anatomical (EPI to T1); and (3) from T1 in participant space to template space (e.g. MNI). The naive approach would be to apply each transformation separately, resulting in three interpolation steps. Instead, the three transformations are first combined into one transformation (for each functional volume) --- using matrix multiplication of the affine transformation matrices --- requiring only a single interpolation step. In addition, you don't lose anything, and your data ends up in template space, so group analysis can be done directly on the results and some tools like brain atlases are available to help interpret them. I don't know how much this matters for MVPA, but on theoretical grounds I found this approach quite appealing and have used it myself as well.
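
In matrix terms, the idea looks like this (a conceptual sketch; the variable names and values are illustrative placeholders, not align_epi_anat.py's actual outputs):

    % three 4x4 affine transformations, in practice estimated by the pipeline:
    motion   = [eye(3), [0.5; 2.0; 0]; 0 0 0 1];  % volume -> functional reference
    epi2anat = eye(4);                            % reference -> T1 (placeholder)
    anat2mni = eye(4);                            % T1 -> template space (placeholder)
    combined = anat2mni * epi2anat * motion;      % composed right-to-left
    % resampling a volume once with 'combined' interpolates the data a
    % single time, instead of once per transformation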

best,
Nick

Nick Oosterhof

May 5, 2022, 12:34:52 PM
to Lyam Bailey, CoSMoMVPA
Hi Lyam,

You're welcome. However, the approach of not registering volumes from different runs to one single reference volume sounds strange to me, unless you do another co-registration step afterwards: due to head movement, we can expect that the same voxel covers (slightly) different parts of the participant's head across different runs.

I have very little experience with FSL, but looking at this document: 


there seems to be the possibility of coregistration between anatomical and functional volumes, which (presumably) leads to properly aligned functional volumes across the runs. (FSL might register each run to the anatomical volume; I'm not sure, but that could be suboptimal.)

This may suggest that using a single reference should be possible. I realise I might be repeating myself here, but for the sake of the sensitivity and correctness of your analyses, I recommend you try hard to set up your analysis in such a way that, for each participant, all functional volumes across all runs are coregistered to one and the same functional volume. To my knowledge, all popular fMRI analysis packages, such as AFNI, BrainVoyager, FSL and SPM, support and recommend this.

With respect to co-registration to MNI space: this is not necessary when only doing ROI analysis, assuming that the ROI definitions do not require using an atlas. Examples include using a functional localiser or drawing ROIs by hand based on anatomical landmarks.

I hope this helps.

best,
Nick

On Mon, 2 May 2022 at 21:26, Lyam Bailey <lyam.m...@gmail.com> wrote:
Dear Nick,

(Apologies for the late reply, I got rather swamped by the end-of-term rush!)

I much appreciate the time you have put into thinking about my problem and proposing work-arounds. Unfortunately I am using FSL, which (to the best of my knowledge) requires motion estimation/correction for each run independently (indeed, this is what I do in my pipeline). So, I think I am stuck with functional data from each run in different (functional) spaces. I did try transforming the data to MNI space, but this yielded questionable results, which I think were in part due to reduced sensitivity from moving out of native space (although your last email seemed to suggest that moving to MNI does not reduce sensitivity?). Ideally I would like to stay in native space. I'm not planning to use cosmo_stack or do group-level analyses per se; instead, I'm planning to extract patterns from each ROI and correlate them between runs for each participant.

My solution was to (for each participant individually) align functional data from each run with example_func from run #1, and then transform the ROIs to the same space. This seems to have worked: it yielded less noisy results than I got in MNI space and allowed me to stay in native space. As you say, though, using cosmo_stack (or similar) would require everything to be in common (MNI) space.
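
For the between-run pattern correlation, something like cosmo_correlation_measure might be useful (a sketch, assuming the aligned runs can be stacked into one masked dataset ds, with .sa.chunks coding the run and .sa.targets coding the condition):

    % split-half correlation: correlates condition patterns between the
    % two chunks (runs) and returns, by default, the Fisher-transformed
    % difference between on-diagonal (same condition) and off-diagonal
    % (different condition) correlations
    result = cosmo_correlation_measure(ds);
    disp(result.samples)
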
--
Lyam Bailey, B.Sc., M.Sc.
Doctoral Student
Department of Psychology & Neuroscience
Dalhousie University