Handling S1200 HCP-YA Group Avg Surface Spaces - How to get from T1w/Native to MNINonLinear/fs_averageLR32k


Reece Hill

Mar 2, 2026, 10:31:54 AM
to HCP-Users
Hi team,

Firstly, thank you for managing and making available the HCP-YA dataset.

I am currently working on a script that handles both subject-specific data and the group average. Downstream, I work on the MNINonLinear/fs_averageLR32k surfaces. However, I'd appreciate it if somebody could advise on how best to transform data from T1w/Native into MNINonLinear/fs_averageLR32k space.

From what I've read, I've gathered some ideas. Could you please verify that the approach below would maximise areal correspondence between modalities? Part of my work compares surface-projected diffusion metrics against fMRI maps. As I understand it, simply projecting to the nearest vertex of an MSMAll surface would not be appropriate.

I use a custom Python script to mimic the functionality of wb_command...

Proposed approach
T1w-registered native voxel-wise data points -> MNINonLinear-registered native vertex data points
1. Diffusion data is acquired registered to the T1w/T1w.nii.gz volume.
2. Voxel metrics are then mapped to the T1w/Native/{subject}.[L|R].midthickness.native.surf.gii surface by ribbon-constrained mapping. This yields surface points as (midthickness_native_face_id, barycentric coordinates w0, w1, w2).
3. As the Native meshes share identical topology, we use the indices directly to get into MNINonLinear space via MNINonLinear/Native/{subject}.[L|R].midthickness.native.surf.gii.

MNINonLinear-registered native vertex data points -> MNINonLinear-registered MSMAll vertex data points
From here is where I am most unsure...

4. Apply the indices and barycentric weights from [3] to MNINonLinear/Native/{subject}.[L|R].sphere.native.surf.gii. Then finding the nearest points on MNINonLinear/Native/{subject}.[L|R].sphere.MSMAll.surf.gii would yield coordinates registered to MSMAll?
5. How do we then get these onto the fs_LR 32k mesh? I've thought about applying MNINonLinear/fs_averageLR32k/{subject}.sphere.32k_fs_LR.surf.gii in a similar fashion to [4] - but does this lose MSMAll alignment?
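The topology-reuse idea in steps 2-5 can be sketched numerically. Below is a minimal toy illustration (numpy only, single-triangle mesh); loading the real GIFTI surfaces, e.g. with nibabel, is omitted, and the arrays are illustrative stand-ins, not HCP data:

```python
# Sketch of the barycentric "carry-over" idea in steps 2-5, using numpy only.
# Toy arrays stand in for the GIFTI surfaces named above.
import numpy as np

def barycentric_point(vertices, faces, face_id, weights):
    """Evaluate a surface point given a face index and barycentric weights.

    Because the Native meshes in T1w/ and MNINonLinear/ share the same
    topology (same faces, same vertex order), a (face_id, weights) pair
    found on one surface can be re-evaluated on the other directly.
    """
    tri = vertices[faces[face_id]]   # (3, 3) corner coordinates of the face
    return weights @ tri             # weighted combination of the corners

def nearest_vertex(sphere_vertices, point):
    """Nearest-vertex lookup (the 'finding nearest points' of step 4)."""
    d = np.linalg.norm(sphere_vertices - point, axis=1)
    return int(np.argmin(d))

# toy mesh: one triangle shared by two "spaces" with identical topology
faces = np.array([[0, 1, 2]])
native_t1w = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
native_mni = native_t1w * 2.0        # same topology, different coordinates
w = np.array([0.5, 0.25, 0.25])

p_t1w = barycentric_point(native_t1w, faces, 0, w)
p_mni = barycentric_point(native_mni, faces, 0, w)
assert np.allclose(p_mni, 2.0 * p_t1w)   # indices and weights carry over
```

This only demonstrates why the index/weight transfer in step 3 is valid; it does not replicate ribbon-constrained mapping itself.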

Then how would this work for the S1200 Group Average data? I've read that it only provides data in MNINonLinear space... but I can't find a sphere for MSMAll alignment?

I feel like I'm close - but could somebody point me in the direction of which wb_commands (and which spheres!) are needed? 

Thank you in advance,
Reece 

Glasser, Matthew

Mar 2, 2026, 2:44:52 PM
to hcp-...@humanconnectome.org

I’m a bit confused here.  It seems you want to map data either onto the fs_LR 32k MSMAll surfaces in the T1w folder (if you don’t need perfect precision) or, if you do, onto the native surface in the T1w folder and then surface-resample from the native mesh to the fs_LR 32k mesh via the MSMAll registration.  I wouldn’t recommend trying to reimplement Workbench commands in Python.
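The route described above (native mesh to fs_LR 32k via the MSMAll registration) corresponds to a single resampling call. Here is a sketch that assembles the invocation as an argument list without running it; all file paths follow typical HCP Pipelines naming and are placeholders to verify against your actual download:

```python
# Assemble (but do not execute) the metric-resample call described above:
# resample a metric from the native mesh to fs_LR 32k via MSMAll.
# Subject ID, metric names, and paths are illustrative placeholders.
subj, hemi = "100307", "L"

cmd = [
    "wb_command", "-metric-resample",
    f"{subj}.{hemi}.my_metric.native.func.gii",                               # metric on the native mesh
    f"MNINonLinear/Native/{subj}.{hemi}.sphere.MSMAll.native.surf.gii",       # current sphere (MSMAll-registered)
    f"MNINonLinear/fs_averageLR32k/{subj}.{hemi}.sphere.32k_fs_LR.surf.gii",  # new (standard) sphere
    "ADAP_BARY_AREA",                                                         # area-adaptive barycentric method
    f"{subj}.{hemi}.my_metric.32k_fs_LR.func.gii",                            # output
    "-area-surfs",                                                            # anatomical surfaces for vertex areas
    f"T1w/Native/{subj}.{hemi}.midthickness.native.surf.gii",
    f"T1w/fsaverage_LR32k/{subj}.{hemi}.midthickness.32k_fs_LR.surf.gii",
]
print(" ".join(cmd))
```

Running this once per hemisphere and combining with -cifti-create-dense-scalar would give a cifti file on the standard grayordinates.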


Matt.


Tim Coalson

Mar 2, 2026, 6:03:18 PM
to hcp-...@humanconnectome.org
For individual data that is already preprocessed (in alignment with either our T1w or MNINonLinear volume spaces), you can use this script (or take inspiration from it, if for some reason ribbon mapping's averaging of data across the cortical thickness isn't a good idea):
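The ribbon-constrained mapping mentioned above has a direct wb_command form. A sketch assembling the invocation as an argument list (not executed); the volume name and paths are illustrative placeholders, not verified:

```python
# Sketch of ribbon-constrained volume-to-surface mapping, assembled as an
# argument list (not executed).  Paths are placeholders following typical
# HCP Pipelines naming.
subj, hemi = "100307", "L"

cmd = [
    "wb_command", "-volume-to-surface-mapping",
    "my_metric_in_MNI.nii.gz",                                           # preprocessed volume data
    f"MNINonLinear/Native/{subj}.{hemi}.midthickness.native.surf.gii",   # surface to map onto
    f"{subj}.{hemi}.my_metric.native.func.gii",                          # output metric on the native mesh
    "-ribbon-constrained",                                               # average across the cortical ribbon
    f"MNINonLinear/Native/{subj}.{hemi}.white.native.surf.gii",          # inner surface
    f"MNINonLinear/Native/{subj}.{hemi}.pial.native.surf.gii",           # outer surface
]
print(" ".join(cmd))
```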


Cortical data should not be averaged across human subjects in the volume, because volume alignment isn't good at dealing with the variability of human folding.  We provide our group-average data in cifti format (cortical surface data and subcortical voxel data) rather than volume format for exactly this reason.  If you want more detail on this problem, see our paper on it: https://www.pnas.org/doi/10.1073/pnas.1801582115

Tim


Tim Coalson

Mar 2, 2026, 7:18:36 PM
to hcp-...@humanconnectome.org
Note in particular that only the MSMAll native sphere and the standard fs_LR sphere are involved in resampling the surface data.  The original (unregistered) native sphere is not useful if you just want to use an existing registration.
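Concretely, the two spheres named above play fixed roles in a resample call. A sketch labelling them (filenames follow typical HCP Pipelines naming and should be checked against your download):

```python
# The only two spheres needed when reusing the existing MSMAll registration.
# Filenames are typical HCP Pipelines naming, shown here as placeholders.
subj, hemi = "100307", "L"

spheres = {
    # source mesh, already deformed by the MSMAll registration
    "current-sphere": f"MNINonLinear/Native/{subj}.{hemi}.sphere.MSMAll.native.surf.gii",
    # fixed standard mesh defining fs_LR 32k
    "new-sphere": f"MNINonLinear/fs_averageLR32k/{subj}.{hemi}.sphere.32k_fs_LR.surf.gii",
}

# The original unregistered sphere (sphere.native.surf.gii) plays no role:
assert "sphere.native.surf.gii" not in " ".join(spheres.values())
print(spheres)
```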

Tim

Reece Hill

Mar 3, 2026, 2:26:24 AM
to HCP-Users, tim.c...@gmail.com
Thanks, Tim. VolumeToCIFTI.sh looks like it's geared toward continuous volumetric data - I'm mapping discrete volumetric data to surface nodes. In wb_command terms, I believe this means -foci-resample.

I have, however, followed VolumeToCIFTI.sh, and it looks like what I am doing in Python is correct.

Apologies to the team for any confusion. And thank you again for the pointers!

Tim Coalson

Mar 3, 2026, 3:52:08 PM
to Reece Hill, HCP-Users
Foci are individual points.  If you have a full grid of voxels with categorical data (represented as integers), that is a label volume (for something with equivalent information, like a bunch of non-overlapping ROIs, I would recommend converting it to this single-map-of-integers representation for surface mapping and resampling).  If you import it to Workbench format with -volume-label-import (to associate names and colors with those integers), you can then use -volume-label-to-surface-mapping, a version of ribbon mapping adapted for categorical data (popularity logic), giving a .label.gii file, which can then be (resampled and) combined across hemispheres (and subcortical structures) into a .dlabel.nii cifti file.
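The categorical-data pipeline just described can be sketched as three wb_command invocations, assembled here as argument lists without executing them; the input volume, label-list file, and paths are illustrative placeholders:

```python
# The label pipeline outlined above, assembled as argument lists (not
# executed).  Inputs and paths are placeholders, not verified HCP files.
subj, hemi = "100307", "L"

pipeline = [
    # 1. attach names/colors to the integer values -> Workbench label volume
    ["wb_command", "-volume-label-import",
     "my_rois.nii.gz", "label_list.txt", "my_rois.label.nii.gz"],
    # 2. ribbon mapping with popularity logic for categorical data -> .label.gii
    ["wb_command", "-volume-label-to-surface-mapping",
     "my_rois.label.nii.gz",
     f"MNINonLinear/Native/{subj}.{hemi}.midthickness.native.surf.gii",
     f"{subj}.{hemi}.my_rois.native.label.gii",
     "-ribbon-constrained",
     f"MNINonLinear/Native/{subj}.{hemi}.white.native.surf.gii",
     f"MNINonLinear/Native/{subj}.{hemi}.pial.native.surf.gii"],
    # 3. resample to fs_LR 32k along the MSMAll registration
    ["wb_command", "-label-resample",
     f"{subj}.{hemi}.my_rois.native.label.gii",
     f"MNINonLinear/Native/{subj}.{hemi}.sphere.MSMAll.native.surf.gii",
     f"MNINonLinear/fs_averageLR32k/{subj}.{hemi}.sphere.32k_fs_LR.surf.gii",
     "ADAP_BARY_AREA",
     f"{subj}.{hemi}.my_rois.32k_fs_LR.label.gii",
     "-area-surfs",
     f"T1w/Native/{subj}.{hemi}.midthickness.native.surf.gii",
     f"T1w/fsaverage_LR32k/{subj}.{hemi}.midthickness.32k_fs_LR.surf.gii"],
]
for cmd in pipeline:
    print(" ".join(cmd))
```

The final combination into a .dlabel.nii (e.g. with -cifti-create-label) is left out for brevity.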

I will note that ribbon mapping doesn't just pick the nearest vertex, nor do -metric-resample or -label-resample (unless you tell them to, which isn't the default, or recommended).  For instance, here is the adap_bary_area resampling weight computation:


For actual processing results, using a Python approximation that doesn't go into these details is not what we would recommend.  If you are just trying to verify your understanding of how it approximately works, that is fine.
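As a rough toy illustration of the adap_bary_area idea only: plain barycentric weights scaled by per-vertex surface areas and renormalized. The real Workbench computation linked above handles considerably more (area adaptation between meshes of different resolution), so this is a conceptual sketch, not the actual algorithm:

```python
# Toy illustration: barycentric weights scaled by vertex areas, renormalized.
# This is only the general flavor of area-weighted resampling; the real
# adap_bary_area code in Workbench is more involved.
import numpy as np

def area_weighted_barycentric(bary, vertex_areas):
    w = np.asarray(bary, dtype=float) * np.asarray(vertex_areas, dtype=float)
    return w / w.sum()   # renormalize so the weights sum to 1

w = area_weighted_barycentric([0.5, 0.25, 0.25], [1.0, 2.0, 1.0])
# equal-area vertices leave plain barycentric weights unchanged:
assert np.allclose(area_weighted_barycentric([0.5, 0.3, 0.2], [3.0, 3.0, 3.0]),
                   [0.5, 0.3, 0.2])
```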

Tim
