surface-based searchlight in a ROI


raffaele tucciarelli

Nov 3, 2017, 5:33:12 AM
to CoSMoMVPA
Hi Nick and CoSMoMVPA users

I am venturing into the world of multivariate analysis at the surface level and, of course, I wanted to use CoSMoMVPA for this. I just have a couple of questions:

1) finding neighbours

my understanding is that one can either (a) use the volumetric functional data and the surfaces as input to cosmo_surficial_neighborhood to define neighbours, or (b) directly use the functional data at the surface level, right?

[nbrhood,vo,fo,out2in]=cosmo_surficial_neighborhood(ds,surf_def, 'count',feature_count, 'center_ids', idx_roi);
% ds can be either volumetric or surficial data

Is one of the two methods better than the other? Or would the results look similar?


2) searchlight in an ROI
This is actually the most important point: what is the easiest way in CoSMoMVPA to do a searchlight in specific surface-based ROIs? I see that in cosmo_surficial_neighborhood one can specify the 'center_ids' and look at just those nodes of the surface, but what about the shapes of the searchlights? Do they follow the shape of the ROI (that is, would they stay within the ROI, or would they also include vertices from neighbouring regions)?


3) functional surface data in cosmo
If I decide to work directly with surface functional data: I got my surfaces using FreeSurfer, and I noticed that CoSMoMVPA deals with AFNI/SUMA or BrainVoyager files at the moment (and GIFTI).
What is the best approach in this case: should I map my functional data from NIfTI to GIFTI and then load them in CoSMoMVPA?


thanks a lot!

Cheers,
Raffaele

Nick Oosterhof

Nov 3, 2017, 11:43:06 AM
to raffaele tucciarelli, CoSMoMVPA
Greetings,

On 3 November 2017 at 10:32, raffaele tucciarelli <rtucci...@gmail.com> wrote:

I am venturing into the world of multivariate analysis at the surface level and, of course, I wanted to use CoSMoMVPA for this.

Excellent. Surface-based analysis has several advantages, maybe most importantly that it is anatomically more accurate as it takes into account the folded and thin nature of the cortex.
 
Not meant to discourage you, but I feel I should mention it: bear in mind that this comes at a cost. In general the pipeline and analyses are more complicated and will take more work than traditional volume-based approaches. Also, good alignment between functional data and anatomy is more important in surface-based analysis than in voxel-based analysis.
 
I just have a couple of questions:

1) finding neighbours

my understanding is that one can either (a) use the volumetric functional data and the surfaces as input to cosmo_surficial_neighborhood to define neighbours, or (b) directly use the functional data at the surface level, right?

[nbrhood,vo,fo,out2in]=cosmo_surficial_neighborhood(ds,surf_def, 'count',feature_count, 'center_ids', idx_roi);
% ds can be either volumetric or surficial data

Is one of the two methods better than the other? Or would the results look similar?

Yes, for information mapping you can do:

I) map the data onto surface nodes first, then do MVPA on the node data;
II) use a surface-to-volume neighbourhood, so that MVPA is done on voxels but the results are assigned to nodes.

I have always used (II), the intuition being that in approach (I) information is lost because (a) the data are interpolated, and (b) multiple voxels can be associated with a single node location, so you'd have to choose how to select the data in that case (nearest location? average value? maximum value?).
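A minimal sketch of approach (II), assuming ds is a volumetric dataset (e.g. from cosmo_fmri_dataset) with .sa.targets and .sa.chunks set; the surface filenames, the voxel count, and the classifier choice are illustrative:

```matlab
% approach (II): searchlights use voxels, results are assigned to nodes
surf_def = {white_fn, pial_fn};         % inner and outer surface files
nbrhood  = cosmo_surficial_neighborhood(ds, surf_def, ...
                                        'count', 100);  % ~100 voxels each

measure        = @cosmo_crossvalidation_measure;
opt            = struct();
opt.classifier = @cosmo_classify_lda;
opt.partitions = cosmo_nfold_partitioner(ds);

% output has one feature per surface node
res = cosmo_searchlight(ds, nbrhood, measure, opt);
cosmo_map2surface(res, 'accuracy.niml.dset');  % e.g. for AFNI/SUMA
```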
 

2) searchlight in an ROI
This is actually the most important point: what is the easiest way in CoSMoMVPA to do a searchlight in specific surface-based ROIs? I see that in cosmo_surficial_neighborhood one can specify the 'center_ids' and look at just those nodes of the surface, but what about the shapes of the searchlights? Do they follow the shape of the ROI (that is, would they stay within the ROI, or would they also include vertices from neighbouring regions)?

It depends on how you define the ROI. When using cosmo_surficial_neighborhood, the searchlight regions are discs, or more precisely, curved cylinders that follow the curvature of the cortex (the cylinder is not very 'high', but its radius corresponds to the searchlight radius).

If you have nodes defined on the cortex which together define an ROI and you want to know which voxels are associated with it, you can use the output from cosmo_surficial_neighborhood to get a surface-node-to-voxel mapping, then take all voxel indices that are associated with one or more nodes in that ROI. Just make sure you remove duplicates (with 'unique') so that you don't count the same voxels multiple times.
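For example (roi_node_ids is an illustrative name for the node indices that make up the surface ROI; check the indexing convention, e.g. base-0 versus base-1 node indices, against your own data):

```matlab
% nbrhood is the output of cosmo_surficial_neighborhood;
% roi_node_ids contains the node indices that form the surface ROI
msk       = ismember(nbrhood.fa.node_indices, roi_node_ids);
voxel_ids = [nbrhood.neighbors{msk}];     % voxels linked to any ROI node
voxel_ids = unique(voxel_ids);            % remove duplicate voxels
roi_ds    = cosmo_slice(ds, voxel_ids, 2);  % dataset with only ROI voxels
```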
 


3) functional surface data in cosmo
If I decide to work directly with surface functional data: I got my surfaces using FreeSurfer, and I noticed that CoSMoMVPA deals with AFNI/SUMA or BrainVoyager files at the moment (and GIFTI).
What is the best approach in this case: should I map my functional data from NIfTI to GIFTI and then load them in CoSMoMVPA?

There is no need to map the functional data from NIfTI to GIFTI if you use a searchlight with node-to-voxel mapping (approach (II) mentioned above). For the node-to-node mapping (approach (I)), then yes, you could map functional voxel data to surface data, but note that you'll have to decide how to do the mapping.
In general, GIFTI is a good format because it aims to be a general format supported by all major analysis packages, just as NIfTI does for volume-based data.
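If you do go the node-based route, a sketch (the filename and the targets/chunks variables are illustrative; the volume-to-surface mapping itself, e.g. with FreeSurfer's mri_vol2surf, happens outside CoSMoMVPA):

```matlab
% load node-based functional data stored as GIFTI
ds = cosmo_surface_dataset('lh_betas.func.gii', ...
                           'targets', targets, 'chunks', chunks);
```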

The Matlab GIFTI library may, however, have some trouble with data containing more than one sample. I don't remember the details or know whether something changed recently, but I do remember running into some problems a while ago.

Does that answer your questions? 


raffaele tucciarelli

Nov 16, 2017, 4:40:49 AM
to Nick Oosterhof, CoSMoMVPA
Hi Nick,
sorry, I forgot to reply to this email.

Yes, now it is clear and I managed to run the analysis.

thanks a lot!
Raffaele

Nick Oosterhof

Nov 16, 2017, 2:02:38 PM
to raffaele tucciarelli, CoSMoMVPA
Hi Raffaele,

On 16 November 2017 at 10:40, raffaele tucciarelli <rtucci...@gmail.com> wrote:
Yes, now it is clear and I managed to run the analysis.

Great to hear that.

best,
Nick

Sebastian Moguilner

Nov 20, 2017, 5:44:37 AM
to CoSMoMVPA
Hi Nick,

I'm testing MEG timecourse searchlight in source space.
The input is an averaged ROI across space (i.e. just one signal per trial) on 1500 trials for condition A and 1500 trials for condition B.

So far I'm not getting good accuracies (roughly 55% at most).

Is it OK to perform such an analysis, taking into account that there is no MVPA in space (just time neighbours)?
If it can be done, do I have to do any extra preprocessing?

Regards


Nick Oosterhof

unread,
Nov 20, 2017, 11:32:17 AM11/20/17
to Sebastian Moguilner, CoSMoMVPA
On 20 November 2017 at 11:44, Sebastian Moguilner <seb...@gmail.com> wrote:
I'm testing MEG timecourse searchlight in source space.
The input is an averaged ROI across space (i.e. just one signal per trial) on 1500 trials for condition A and 1500 trials for condition B.

So far I'm not getting good accuracies (roughly 55% at most).

55% can be statistically very significant, or not significant at all; this depends, among other things, on the number of trials and the number of participants.
 

Is it OK to perform such an analysis, taking into account that there is no MVPA in space (just time neighbours)?

It's not wrong, but it would make more sense to me not to average the signals from different voxels because you might be losing information. Your analysis may be more sensitive if you leave out the averaging step.
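A sketch of such a time-only searchlight without the spatial averaging, assuming ds is a source-space dataset (e.g. from cosmo_meeg_dataset) with .sa.targets and .sa.chunks set; the radius of 2 time bins and the classifier choice are illustrative:

```matlab
% searchlight over time only: each searchlight contains all source
% locations within a window of +/- 2 time bins
time_nbrhood = cosmo_interval_neighborhood(ds, 'time', 'radius', 2);

measure        = @cosmo_crossvalidation_measure;
opt            = struct();
opt.classifier = @cosmo_classify_lda;
opt.partitions = cosmo_nfold_partitioner(ds);

% output has one accuracy value per time point
res = cosmo_searchlight(ds, time_nbrhood, measure, opt);
```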

Nick Oosterhof

Nov 25, 2017, 6:52:19 AM
to Sebastian Moguilner, CoSMoMVPA


On 25 November 2017 at 11:56, Sebastian Moguilner <seb...@gmail.com> wrote:
Do you have in mind a reference to cite about the fact that channel based decoding is usually better than source based decoding?

I'm not sure if that is necessarily true. 

For whole-brain decoding, maybe yes; or at least, in source decoding there is redundant information, because usually there are more voxels than channels. Plus, there is a (usually linear) transformation of the data; whether that affects classification is probably classifier-dependent. If you're only interested in time (or time and frequency) and not in spatial location, then I don't see any reason to transform the data to source space.

For ROI-based analysis (ROI in space), however, source space could in principle allow for stronger statements about the spatial location of the effect. However, that /does/ depend on an accurate source reconstruction, and it does depend on some assumptions, with different assumptions leading to different source solutions (e.g. MNE versus beamformer).

