Computing morphometry measures from native FS surfaces resampled to HCP 164k mesh


Daniel Drake

Feb 27, 2023, 12:12:53 AM2/27/23
to HCP-Users
Hi All,

We have a project that looks for MRI-based biomarkers for Parkinson's disease.  Subjects undergo multiple scans: T1w (but not T2w), rs-fMRI, DTI, etc.  From these scans, we derive many different ROI-based features that might be part of a composite PD-related signature.

We're using fMRIPrep for our resting state scans; the default FreeSurfer analysis of our T1w anatomical scan is part of that process.  While not strictly necessary, we would prefer that all our features were based on the same set of ROIs across all our scans, including morphometry measures derived from the FreeSurfer analysis.  Since there seems to be a larger array of possible atlases in the HCP format than the limited number provided by FreeSurfer, we're considering converting FreeSurfer output to the HCP 164k mesh.

To test the feasibility of converting to HCP space, we follow Part B of the PDF referenced below.  We resample the native FreeSurfer surfaces (e.g., surf/lh.white and surf/lh.pial) and parcellations (e.g., label/lh.aparc.a2009s.annot) to the 164k HCP surface mesh.  We then recompute several morphometric quantities directly from the resampled, native 164k HCP white and pial surface meshes.  For example, we generate gray matter volume metrics on the HCP vertices via wb_command's -surface-wedge-volume command.  Similarly, we compute surface area metrics using wb_command's -surface-vertex-areas command.  Summing these quantities across nodes within each parcel of the (resampled) Destrieux (aparc.a2009s.annot) atlas gives volume and area summaries that are near-identical to those provided by FreeSurfer's statistics.

However, I'm having trouble reproducing FreeSurfer's cortical thickness measure ("ThickAvg").  As I understand it, FreeSurfer computes the distance from pial nodes to the white matter surface (call it Dist1), then again from the white matter nodes to the pial surface (Dist2), and defines thickness as the average of these two distances at each node.  I attempted to reproduce this computation using wb_command's -signed-distance-to-surface command (twice), and then averaging the absolute values via -metric-math.  When we compute the mean of this composite distance across nodes within each Destrieux parcel, the result, while generally similar to the values FreeSurfer gives, is quite a bit noisier than either the volume or area measures computed above.  I've read that FreeSurfer might use its smoothed white matter surface ("smoothwm") for one boundary of the thickness calculation, but we still end up with noisy estimates compared to FreeSurfer's values.
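To make the computation concrete, here is a toy sketch (pure Python, hypothetical data) of the two-pass average I'm attempting: nearest-point distance from pial to white, nearest-point distance from white to pial, then the per-vertex mean.  Note this simplifies to vertex-to-vertex distances; the real signed-distance computation measures to the surface itself, points inside triangles included.

```python
import math

def nearest_dist(p, pts):
    """Distance from point p to the nearest point in pts (vertex-only simplification)."""
    return min(math.dist(p, q) for q in pts)

def two_sided_thickness(white, pial):
    """Per-vertex thickness as the average of pial->white and white->pial
    nearest-point distances (the two-pass average described above)."""
    d1 = [nearest_dist(p, white) for p in pial]   # Dist1: pial -> white
    d2 = [nearest_dist(w, pial) for w in white]   # Dist2: white -> pial
    return [(a + b) / 2.0 for a, b in zip(d1, d2)]

# Toy example: two flat "surfaces" 3 mm apart
white = [(x, 0.0, 0.0) for x in range(5)]
pial  = [(x, 3.0, 0.0) for x in range(5)]
print(two_sided_thickness(white, pial))  # -> [3.0, 3.0, 3.0, 3.0, 3.0]
```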

I was hoping that someone could tell me:
1) is this even the right approach, or am I not even wrong?  For example, I've found discussions where HCP-based atlases are resampled to fsaverage space,
   (https://figshare.com/articles/dataset/HCP-MMP1_0_projected_on_fsaverage/3498446) and then mris_anatomical_stats is used on the new annot file (https://www.mail-archive.com/frees...@nmr.mgh.harvard.edu/msg68272.html).
2) am I computing thickness using the appropriate steps? (e.g., is -signed-distance-to-surface the command I should be using? are there steps I'm leaving out?)
3) should I even be trying to reproduce the FreeSurfer measures?  I read, for example, that in the HCP universe, it's preferable to regress out the curvature when computing thickness measures (https://cdn.fs.pathlms.com/w6X8w8GWTf6WDtn28Tgq).
4) is there a HCP/workbench-based script that will generate a table of morphometric measures similar to what freesurfer gives rather than attempt to reproduce them one by one as I do here?
5) ?

Any information you could provide would be very helpful.  I'm relatively new to surface-based analyses and feel like I've probably gone too deep down the wrong rabbit hole in an attempt to cobble together a solution.

[[https://wiki.humanconnectome.org/display/PublicData/HCP+Users+FAQ?preview=/63078513/91848788/Resampling-FreeSurfer-HCP_5_8.pdf#HCPUsersFAQ-9.HowdoImapdatabetweenFreeSurferandHCP]]

Tim Coalson

Feb 27, 2023, 12:49:37 AM2/27/23
to hcp-...@humanconnectome.org
Signed distance doesn't try to find the distance along the cortical column direction, it finds the closest point on the other surface regardless of direction (and it may use any point inside any triangle, rather than being restricted to vertices, which has geometric implications based on curvature).  -surface-to-surface-3d-distance just uses the corresponding vertex of the other surface, so it is always along what freesurfer thinks the cortical column direction is (probably just the local normal vector of the white surface).  I'm not sure what approach freesurfer takes internally.
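A toy 2D illustration of that difference, with made-up coordinates (and simplified to vertex-to-vertex distances; true signed distance can land anywhere inside a triangle):

```python
import math

# Hypothetical corresponding vertex lists: same index = the presumed
# cortical-column pairing.  In this toy, the columns run diagonally.
white = [(0.0, 0.0), (4.0, 0.0)]
pial  = [(4.0, 3.0), (0.0, 3.0)]

# -surface-to-surface-3d-distance analogue: distance to the same-index vertex
corresp = [math.dist(w, pial[i]) for i, w in enumerate(white)]

# signed-distance analogue: distance to the closest point, any direction
nearest = [min(math.dist(w, p) for p in pial) for w in white]

print(corresp)  # -> [5.0, 5.0]  (along the "column" direction)
print(nearest)  # -> [3.0, 3.0]  (a closer off-column point wins)
```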

Tim


--
You received this message because you are subscribed to the Google Groups "HCP-Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hcp-users+...@humanconnectome.org.
To view this discussion on the web visit https://groups.google.com/a/humanconnectome.org/d/msgid/hcp-users/1050b678-fdf5-427a-bb36-f483b095fe8bn%40humanconnectome.org.

Daniel Drake

Feb 27, 2023, 1:14:35 AM2/27/23
to HCP-Users, tim.c...@gmail.com
Thank you Tim.  I've tried to estimate cortical thickness from the -surface-to-surface-3d-distance as well, but the results are worse in the sense that the estimated mean thickness of an ROI is consistently larger than that computed by freesurfer.  At least with the signed distance approach, there doesn't seem to be a lot of bias.  (I have a set of PDF slides that illustrates what I'm trying to say---is it okay to attach external material in this group?)  I've also simply divided the computed volume by surface area to get an estimate of thickness---it's not too bad, but it seems I should be working from first principles.

Thanks again!  I really appreciate your input.
- Daniel

Glasser, Matt

Feb 27, 2023, 4:28:10 PM2/27/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

Regressing out cortical folding from thickness is mainly to focus on the differences in thickness across different cortical areas irrespective of cortical folding. 

 

As far as using HCP tools, it is easiest if you use them throughout (i.e., using HCP Pipelines with legacy modes, rather than using fMRIPrep), but ciftify might be helpful here.  Otherwise, you could modify the PostFreeSurfer script to get your subjects onto the HCP meshes.  For example, I recently imported some recon-all datasets using PostFreeSurfer (I think with a couple of small modifications that I need to put back into the HCP Pipelines).


Matt.

 



Tim Coalson

Feb 27, 2023, 5:25:45 PM2/27/23
to Daniel Drake, HCP-Users
If you don't specifically have to compare to previously-measured numbers, I would probably consider thickness measured in the normal direction from the local cortical plane more valid than allowing whatever diagonal happens to give the shortest distance.  Considering that the cortex folds pretty sharply, and thickness changes might occur somewhere close to that fold, allowing diagonals (via signed distance) may increase the apparent size of a low-thickness region, in a non-uniform manner.

Tim

Daniel Drake

Feb 27, 2023, 5:39:51 PM2/27/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com
Matt,

Thanks so much for your input. 

I did take a look at ciftify previously.  It took a little bit of fiddling on my part to get ciftify_recon_all to run.  From the lack of updates on the github site, I sort of got the impression that it wasn't maintained any more.  Glad to know that's still a viable option.

I wasn't aware that there was a legacy HCP pipeline; I just took it for granted that, without a T2w scan, that avenue was out of reach.  A quick search turned up this post (https://groups.google.com/a/humanconnectome.org/g/hcp-users/c/aMnfE89OqVQ), which is helpful.  I'll look into it again.

Regarding lack of fieldmaps: I'm working with some old DTI scans that don't have reverse-phase counterparts, precluding the use of FSL's topup in preprocessing.  (topup uses a pair of blip-up/blip-down B0 frames to correct for susceptibility distortion.)  The Synb0-DISCO technique (https://github.com/MASILab/Synb0-DISCO) creates a synthetic, distortionless B0 from an initial distorted B0 and the T1w anatomical.  We feed the real B0 scan and the synthetic (undistorted) B0 into topup, indicating that the readout bandwidth of the synthetic scan is effectively infinite.  This, in effect, allows topup to find the warp that maps the distorted, true scan to the undistorted, synthetic one.  Just wondering if that would be of any use here.
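For reference, the acqparams.txt we pass to topup looks something like this; the phase-encode direction and the acquired readout time below are placeholders, but the zero readout time on the synthetic row is the key trick:

```text
# acqparams.txt: one row per b0 volume given to topup
# columns: phase-encode direction (x y z), total readout time (s)
0 1 0 0.0500   # acquired, distorted b0 (readout time is scan-specific)
0 1 0 0.0000   # synthetic, undistorted b0: zero readout = "infinite bandwidth"
```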

Thanks again for your input; I really appreciate it.
- Daniel


Glasser, Matt

Feb 27, 2023, 6:48:14 PM2/27/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

As I noted in the recent thread on the FSL list, that particular method may have promise.  They compared it to a gold standard correction, and it beat other methods.  You might be able to hack the diffusion preprocessing pipeline to work with it.  Once you manage to get topup to compute a field, you can convert that to a field map for other processing if you like.  

Daniel Drake

Feb 28, 2023, 9:50:33 AM2/28/23
to Tim Coalson, HCP-Users
Thanks Tim.  Your comments have made me think about the different kinds of situations that could result in local distance measures that aren't really representative of thickness as we imagine it.  It seems to me that the line joining corresponding nodes of the white matter and pial surfaces might better approximate the surface normal (aka 3D distance), but I'm not sure I can trust my intuition in this.  And the fact that that approach doesn't reproduce the freesurfer results confuses me somewhat.

I'm not particularly wed to using freesurfer metrics.  Matt mentioned the HCP legacy pipeline as an alternative (that doesn't need the T2w scan, which I don't have).  It may take me a while to figure out how to generate field maps.  As maybe an easier next step, I will investigate the use of ciftify, which seems to produce a number of metrics, including thickness.  (Though I don't know if they are derived directly from the native surfaces, or are simply interpolations of the freesurfer values on the resampled mesh.  I've found that the latter doesn't work as well as I might have expected.)

I so appreciate all the feedback!

Best,
- Daniel

Glasser, Matt

Feb 28, 2023, 10:04:19 AM2/28/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

The HCP Pipelines report FreeSurfer’s version of cortical thickness.


Matt.

 


Tim Coalson

Feb 28, 2023, 4:53:17 PM2/28/23
to Glasser, Matt, hcp-...@humanconnectome.org
Specifically, the HCP pipelines convert and resample the "lh.thickness" and "rh.thickness" files per-subject.  We don't touch any "ThickAvg" file, as far as I am aware.

Tim

Harms, Michael

Feb 28, 2023, 6:03:49 PM2/28/23
to hcp-...@humanconnectome.org, Glasser, Matt, Anderson Winkler

This blog post from Anderson Winkler (2018) lists several approaches for computing thickness, with references:

https://brainder.org/category/freesurfer/

 

So, there’s a whole literature on this topic if you want to dig into it.

 

Cheers,

-MH

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Associate Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.                        Tel: 314-747-6173

St. Louis, MO  63110                          Email: mha...@wustl.edu

 


Daniel Drake

Mar 2, 2023, 12:28:20 AM3/2/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com
Okay, thanks Matt.  That helps clear up some uncertainty I had.

My attempts at resampling FS thickness onto the HCP 164k mesh and then looking at the averages per Destrieux region weren't great: as much as 10% relative error compared to FS values in some regions.  I figured that computing the metrics directly from the resampled surfaces (as opposed to resampling the FS measures) would be more self-consistent.

[I spent some time getting ciftify up and running]

I've attached some slides that illustrate what I'm seeing.  For three subjects, I looked at volume, area, and thickness.  I resampled the FS metrics onto the 164k mesh via the PDF recipe ("Resampled"), and also computed the actual metrics from the resampled surface itself ("Computed").  In addition, for thickness, I show the results from ciftify_recon_all ("Ciftify") on the HCP 164k mesh.

Each point represents the summed (for area and volume) or averaged (for thickness) metric for an ROI from the Destrieux atlas; red and green denote left and right hemispheres.  The x-axis values are the appropriate column of the FreeSurfer statistics table.

Surface area and volume, when recomputed on the resampled surfaces via the appropriate wb_command option, mimic the original FS values quite well.  Thickness, however, is quite a bit noisier.  I'm averaging the distance from each white matter vertex to the nearest point on the pial surface and the distance from each pial vertex to the nearest point on the white matter surface.  I've worked with other options, but this matches best.

The ciftify_recon_all resampling for thickness isn't qualitatively different than the direct resampling or the computation.  I'm happy to go with the official ciftify results.  I'm just surprised there isn't a package somewhere that computes morphometric quantities directly from the HCP mesh.

I hope this makes sense and that I'm not wasting everyone's time.  Thank you all for your input!

Best,
- Daniel

FS_to_HCP_no_images.pdf

Daniel Drake

Mar 2, 2023, 12:33:53 AM3/2/23
to hcp-...@humanconnectome.org, Glasser, Matt
Thanks for the input, Tim.  The "ThickAvg" is actually the column from the FS statistics that I'm using as the reference metric, not a file.  From your answer and Matt's, it seems that both ciftify and the HCP pipelines resample the FreeSurfer metrics onto the HCP meshes, and do not recompute the quantities directly from the HCP mesh.

- Daniel


Daniel Drake

Mar 2, 2023, 12:47:52 AM3/2/23
to hcp-...@humanconnectome.org, Glasser, Matt, Anderson Winkler
Hi Michael,

Thank you.  I came across that post a couple of times while trying to reproduce the FS metrics from the resampled surfaces.  Elsewhere in this thread I show the results of my attempts to compute thickness (and surface area and volume) directly from the resampled *h.pial and *h.white surfaces (as opposed to resampling the FS-provided metrics) using wb_command.

In computing thickness, I used the approach outlined directly under the "Cortical Thickness" section, which I thought was the same computation underlying FreeSurfer's.  Tim and I have had a little discussion about how that approach might get a little wonky in regions of high curvature, where the nearest surface might not lie along the near-normal as you might imagine.  In any event, I seemed to be more successful reproducing surface area and volume measures from the resampled surfaces than thickness (based on the graphs attached in this thread).

I really do appreciate everyone weighing in.  Thank you.

- Daniel


Glasser, Matt

Mar 2, 2023, 10:23:14 AM3/2/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

I don’t understand the goal here.  If you want to use the same regions as FreeSurfer, why not just use the FreeSurfer values?  You could also compare the GIFTI/CIFTI results on the native mesh surfaces (so without the resampling but otherwise with our tools) to see if there is some difference in how the averaging is occurring.  For example, our recommended approach would be a weighted average accounting for vertex areas in the averaging.

 

Matt.

 


Daniel Drake

Mar 2, 2023, 11:12:39 AM3/2/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com
We resampled because we wanted access to the wider variety of atlases in the HCP domain.  However, before proceeding with the newer atlases, I wanted to validate that I'd get the same results with the resampled versions as I would with the original FreeSurfer output, using the same atlas in either case.

> You could also compare the GIFTI /CIFTI results on the native mesh surfaces (so without the resampling but otherwise with our tools) to see if there is some difference in how the averaging is occurring.

This is effectively what the "Original" plots indicate: the x-axis values are the results from the appropriate column of FreeSurfer's stats table; the y-axis values represent the computed average (for thickness) or sum (for volume and surface area) across the FreeSurfer metric values (converted to GIFTI) associated with the nodes within each ROI.  The results of the computations are identical to the FreeSurfer table to within the precision given in the table.  (I've actually done the averaging and summing of the metrics in R via ciftiTools, but I imagine using wb_command -cifti-roi-average does the same---there is no indication that a surface needs to be involved to perform the averaging, only the resampling---but I'm happy to be corrected!)

Glasser, Matt

Mar 2, 2023, 11:24:06 AM3/2/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

I guess I remain confused as to the goal here.  If you have the measures you already need, just use them. 

 

What I suggested you do is not, in fact, what you did because we don’t know if FreeSurfer is doing a simple average or not.

 

In any case, a simple average is not the most correct thing to do.  Vertices (particularly on native meshes) do not represent equal amounts of the cortical surface; some represent larger areas and others smaller.  To make a regional average that is fully representative of the space subtended by that region, you want to weight the averages by vertex areas in the subject’s physical space.  This is trivial to do when using the HCP Pipelines, because we produce the needed files by default.  I don’t know if ciftify does this, but the commands are not hard: wb_command -surface-vertex-areas (or even just use the surface itself with wb_command -cifti-weighted-stats).  Importantly, these surfaces need to be in the subject’s physical space, i.e., in ${StudyFolder}/${Subject}/T1w, not the MNI-transformed surfaces in ${StudyFolder}/${Subject}/MNINonLinear.  It is possible that FreeSurfer is doing something analogous to the recommended method above; if you are doing simple averages, you will not get the same answer.  You could ask the FreeSurfer folks whether they take vertex areas into account in their stats.
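A toy illustration of the difference the weighting can make (made-up numbers; a sketch of the idea, not the exact tool behavior):

```python
def weighted_mean(values, weights):
    """Mean of per-vertex values weighted by per-vertex surface areas."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Hypothetical ROI: 3 vertices with unequal areas (mm^2) and thickness (mm)
thickness = [2.0, 3.0, 4.0]
areas     = [1.0, 1.0, 8.0]   # one vertex subtends most of the patch

simple   = sum(thickness) / len(thickness)   # 3.0
weighted = weighted_mean(thickness, areas)   # (2 + 3 + 32) / 10 = 3.7
print(simple, weighted)
```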

Daniel Drake

Mar 2, 2023, 2:05:06 PM3/2/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com
I'm sorry Matt.  I really appreciate your feedback and your patience.  I feel like I'm not being clear enough (and possibly my mental model doesn't match reality)---my apologies.  And as I'm about to send it, I see what a wall of text this is.  Please feel free to disengage if necessary!

My goal: I would like summary thickness values (or other FreeSurfer metrics) for ROIs defined by an HCP-based atlas that FreeSurfer doesn't have.  I see two approaches: rely on metrics computed by FreeSurfer, or bypass the metrics entirely and (re)derive morphometric measures directly from a mesh.

Let's start with the latter:
You mentioned wb_command -surface-vertex-areas.  That is exactly the command I used to generate the "Computed" plots of surface areas directly from the resampled, white-matter native mesh, thus bypassing the FreeSurfer metrics entirely.  Similarly, I used -surface-wedge-volume to compute the volume, also in the plots.  Surprisingly (?) they both match up very closely with the FreeSurfer-derived values.  Unfortunately, I haven't found a similar wb_command (or sequence of commands) that I can apply to the resampled surfaces to derive similarly precise estimates of thickness.  My best attempt involves using -signed-distance-to-surface twice (once from white to pial, once from pial to white), and then averaging the absolute values as per Winkler (2018).  FreeSurfer also generates other metrics that might be of interest, but it would take some trial and error to find the wb_commands required to reproduce them.
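For what it's worth, my mental model of what -surface-vertex-areas computes is the usual one-third-of-each-adjacent-triangle convention; here is a pure-Python sketch on a toy mesh (wb_command's exact convention may differ in detail):

```python
import math

def triangle_area(a, b, c):
    """Area of a 3D triangle: half the cross-product magnitude."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def vertex_areas(coords, triangles):
    """Give each vertex one third of the area of every adjacent triangle."""
    areas = [0.0] * len(coords)
    for i, j, k in triangles:
        third = triangle_area(coords[i], coords[j], coords[k]) / 3.0
        for v in (i, j, k):
            areas[v] += third
    return areas

# Toy mesh: a single unit right triangle, total area 0.5
coords = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tris = [(0, 1, 2)]
print(vertex_areas(coords, tris))   # each vertex gets 0.5 / 3
```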

Perhaps the FreeSurfer code that computes these morphometrics is sufficiently general that it would work on a T1w/native HCP mesh.  I guess this would be ideal.  I'll try to figure that out.


Using FreeSurfer metrics:
I could resample an HCP atlas to the freesurfer mesh and derive ROI summaries on the metrics there; or, I could resample the FreeSurfer metrics to the HCP mesh.  I've opted for the latter.  I've done it "by hand" via the workbench commands outlined in part B of the "FAQ-9. How do I map data between FreeSurfer and HCP?" on the native T1w mesh, and now via ciftify.

Rather than just blindly accepting that everything was fine and applying our desired HCP atlas to the resampled results, I took two validation steps.

First, prior to resampling, I verified that I can reproduce the summary ROI values FreeSurfer provides by averaging *h.thickness within each ROI's nodes.  A straight, unweighted average gives exactly the values reported by FreeSurfer to within the precision of its table.  (The "Original" plots in the slides.)  As a consequence, I would deduce that FreeSurfer is employing an unweighted average (without taking mesh geometry into account) to go from its *h.thickness (or *h.area, or *h.volume) to the reported statistics in the table.

Second, I wanted to see if the ROI-based summaries reported by FreeSurfer could be reproduced in the resampled space.  Using the same ROIs (themselves resampled to HCP 164k), I performed a straight, unweighted average on the resampled (using the native T1w mesh) metrics.  (Shown in the "Resampled" panels in the slides.)  Note that I did not expect surface area and volume to match, since I'm summing the interpolated FreeSurfer metrics over what (almost certainly) is a different number of nodes than were in the original FreeSurfer version of the same ROI.  Thickness, however, being an average, had a chance at being similar.  And it is, but just not as close as I would have expected, hence my original inquiry.

At this point I stopped and asked you all if this mild discrepancy was expected.

I agree that a weighted approach incorporating underlying mesh geometry to derive the summary metric values makes sense.  I very much appreciate you pointing out wb_command -cifti-weighted-stats.  I will look into that.


Oof.  Thank you, everyone, for bearing with me to this point and for all your input.  I will go hide away for a while so as to not waste more of your time.

Best,
- Daniel

Glasser, Matt

Mar 2, 2023, 2:18:35 PM3/2/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

I don’t think you should try to rederive the thickness measure.  I would just resample the standard FreeSurfer thickness measure to the appropriate mesh and then average within your desired ROIs using the vertex areas.  I wouldn’t put more thought into this than that.

Tim Coalson

Mar 2, 2023, 8:04:58 PM3/2/23
to Daniel Drake, hcp-...@humanconnectome.org
For specifically parcellations (or any non-overlapping ROIs), you may also want to consider wb_command -cifti-parcellate.  The output is a parcellated cifti file, which you can view on the surface.  If you compute the difference between the original and computed roi-averaged thickness, you can put that into a parcellated cifti file and view it on the surface to see where the discrepancies are, to help figure out why they happen.

You could also compare the versions of the parcellations on fs_LR versus on the native mesh, to check if you have small areas getting their boundaries changed enough by the resampling (or medial wall masking?) to cause a discrepancy.

I didn't expect rh/lh.thickness to be derived from a signed distance approach, but it sounds like they may be.

Tim

Daniel Drake

Mar 3, 2023, 10:27:56 AM3/3/23
to Tim Coalson, hcp-...@humanconnectome.org
Thanks Tim, these are great suggestions.  I haven't quite got the hang of wb_view yet.  If I recall from my initial attempts, I wasn't able to load files individually, piecemeal, like I do for, say, AFNI.  I noticed that ciftify generates a .spec file for each directory so wb_view understands what type of file it is loading.  I guess I need to do the same.

When Matt mentioned -metric-weighted-stats, I dug a little deeper.  It looks like -metric-weighted-stats works on one ROI per call, which is a little cumbersome to code up for a total parcellation.  Then I found -cifti-parcellate, as you pointed out, which seems to perform the operation (sum, mean, etc.) for each ROI in one call.  It has the ability to weight by area (-area-surface) so I think this essentially does what -metric-weighted-stats does, but in a more convenient package.  (Please correct me if I'm wrong!)

In my initial run of -cifti-parcellate, it seems to not treat the cifti hemispheres separately, which is sort of what I expected it to do: I've got the same 75 ROI indices in each hemisphere (direct from freesurfer giftis), and when I dump the resulting pconn.nii into a text file, there are only 75 values output instead of the 150 I expected.  Should I just boost, say, the indices in the right hemisphere by some offset greater than 75 so that -cifti-parcellate makes the distinction?  Or is there some more idiomatic way to do this?  Also, when dumping the pconn file to text, is there some way to get a second column that provides the corresponding label index, or label name?  It makes me a little nervous just relying on the ordering with no explicit indication what the true index is.
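To show what I mean by boosting the indices, here is a pure-Python sketch of the remapping I have in mind (in practice it would operate on the GIFTI label table, e.g. via nibabel; the keys and the offset here are made up):

```python
def offset_hemisphere(label_table, vertex_keys, offset, prefix):
    """Shift label keys and prefix label names for one hemisphere so its
    parcels stay distinct from the other hemisphere's (key 0 = unlabeled)."""
    new_table = {(k + offset if k else 0): (prefix + name if k else name)
                 for k, name in label_table.items()}
    new_keys = [k + offset if k else 0 for k in vertex_keys]
    return new_table, new_keys

# Hypothetical right-hemisphere label table and per-vertex keys
rh_table = {0: "???", 1: "G_and_S_frontomargin", 2: "G_and_S_occipital_inf"}
rh_keys  = [0, 1, 1, 2]
table, keys = offset_hemisphere(rh_table, rh_keys, 75, "R_")
print(table)  # {0: '???', 76: 'R_G_and_S_frontomargin', 77: 'R_G_and_S_occipital_inf'}
print(keys)   # [0, 76, 76, 77]
```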

I realize I'm abusing your good will in asking what are probably trivial questions that I could find with additional searching.  I very much appreciate your and the community's willingness to help people like me get up to speed.

- Daniel

Glasser, Matt

Mar 3, 2023, 11:03:42 AM3/3/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com

wb_command -cifti-parcellate is probably the better command, and does work with vertex area weighting.  You’d probably have to show us the commands and the contents of the files you are using in them to help further with your issue though.

Daniel Drake

Mar 3, 2023, 2:18:49 PM3/3/23
to HCP-Users, glas...@wustl.edu, tim.c...@gmail.com
Hi Matt,

Here's a short script that
1) creates a cifti dscalar file from resampled FS thickness giftis
2) creates a cifti dlabel file from the resampled FS annotation file (aparc.a2009s)
3) uses cifti-parcellate weighted with resampled FS native/T1w space surfaces and generates a cifti pscalar file
(I may have mistakenly called this a pconn file in my previous message)
4) exports the pscalar file to text

I can provide data if you want it.

Each hemisphere has seventy-five ROIs.  I expected -cifti-parcellate to maintain separation between hemispheres and produce 150 summaries, 75 per hemisphere.  Instead it gives me just 75 summaries, which I'm assuming are means across both hemispheres for each index.  Do I need to boost the indices of, say, the right hemisphere labels by adding 75 so that -cifti-parcellate understands these are distinct parcels?  Or is there a more idiomatic way to do this that preserves the separation of the hemispheres?

Thanks, as always!
- Daniel

% --- code begins

# create metric cifti
# {l,r}h.thickness.shape.gii are FS metric files resampled to HCP164k using -metric-resample and ADAP_BARY_AREA method.

wb_command -cifti-create-dense-scalar \
        "sub-00001.resampled.thickness.164k_fs_LR.dscalar.nii" \
        -left-metric  "lh.thickness.shape.gii" \
        -right-metric "rh.thickness.shape.gii"

# create label cifti
# {l,r}h.aparc.a2009s.annot.label.gii are FS annotation files resampled to HCP164k using -label-resample with ADAP_BARY_AREA method.

wb_command -cifti-create-label \
          "sub-00001.aparc.a2009s.annot.164k_fs_LR.dlabel.nii" \
          -left-label  "lh.aparc.a2009s.annot.label.gii" \
          -right-label "rh.aparc.a2009s.annot.label.gii"

# -cifti-parcellate with surface area weighting
# {l,r}h.midthickness.surf.gii are FS surface files resampled to HCP164k using -surface-resample with the BARYCENTRIC method.

wb_command -cifti-parcellate \
          "sub-00001.resampled.thickness.164k_fs_LR.dscalar.nii" \
          "sub-00001.aparc.a2009s.annot.164k_fs_LR.dlabel.nii" \
          2 \
          "sub-00001.resampled.thickness.164k_fs_LR.pscalar.nii" \
          -spatial-weights \
          -left-area-surf  "lh.midthickness.surf.gii" \
          -right-area-surf "rh.midthickness.surf.gii" \
          -method MEAN

# export pscalar

wb_command -cifti-convert -to-text  \
          "sub-00001.resampled.thickness.164k_fs_LR.pscalar.nii" \
          ./temp.txt

cat ./temp.txt


% --- code ends

The contents of temp.txt, when I read it into R:
> data.d <-read_csv("temp.txt", show_col_types=FALSE, col_names=FALSE)
> data.d

> # A tibble: 75 × 1
     X1
  <dbl>
1  2.56
2  2.69
3  2.51
4  2.91
5  2.83
6  3.03
7  2.99
8  3.06
9  3.57
10  2.95
# … with 65 more rows
# ℹ Use `print(n = ...)` to see more rows


Glasser, Matt

unread,
Mar 3, 2023, 2:22:35 PM3/3/23
to hcp-...@humanconnectome.org

Do you get any warnings with the second step?

Daniel Drake

unread,
Mar 3, 2023, 5:27:31 PM3/3/23
to hcp-...@humanconnectome.org
No warnings in creating the dlabel cifti file.

Glasser, Matt

unread,
Mar 3, 2023, 5:33:24 PM3/3/23
to hcp-...@humanconnectome.org

Please use wb_command -file-information on your .dlabel.nii file.

Daniel Drake

unread,
Mar 3, 2023, 5:58:45 PM3/3/23
to hcp-...@humanconnectome.org
This is what I get.  I'm worried about the vertex count; let me go back and make sure this is indeed the resampled version...

I'm leaving this here, but I totally messed up by using the giftified freesurfer files, not the resampled ones.  I'll use the correct files and try again.  So sorry!

[dfdrake@gl-login3 matt]$ wb_command -file-information sub-00001.aparc.a2009s.annot.164k_fs_LR.dlabel.nii
Name:                    sub-00001.aparc.a2009s.annot.164k_fs_LR.dlabel.nii                
Type:                    CIFTI - Dense Label                                                
Structure:               CortexLeft CortexRight                                            
Data Size:               1.10 Megabytes                                                    
Maps to Surface:         true                                                              
Maps to Volume:          false                                                              
Maps with LabelTable:    true                                                              
Maps with Palette:       false                                                              
Number of Maps:          1                                        
Number of Rows:          273988                                  
Number of Columns:       1                                        
Volume Dim[0]:           0
Volume Dim[1]:           0
Volume Dim[2]:           0
Palette Type:            None
CIFTI Dim[0]:            1
CIFTI Dim[1]:            273988
ALONG_ROW map type:      LABELS
ALONG_COLUMN map type:   BRAIN_MODELS
    Has Volume Data:     false
    CortexLeft:          137119 out of 137119 vertices
    CortexRight:         136869 out of 136869 vertices

Map   Map Name
  1   node label

Label table for ALL maps
       KEY   NAME                      RED   GREEN    BLUE   ALPHA
         0   ???                     0.000   0.000   0.000   0.000
         1   G&S_frontomargin        0.090   0.863   0.235   1.000
         2   G&S_occipital_inf       0.090   0.235   0.706   1.000
         3   G&S_paracentral         0.247   0.392   0.235   1.000
         4   G&S_subcentral          0.247   0.078   0.863   1.000
         5   G&S_transv_frontopol    0.051   0.000   0.980   1.000
         6   G&S_cingul-Ant          0.102   0.235   0.000   1.000
...
        70   S_precentral-sup-part   0.082   0.078   0.784   1.000
        71   S_suborbital            0.082   0.078   0.235   1.000
        72   S_subparietal           0.396   0.235   0.235   1.000
        73   S_temporal_inf          0.082   0.706   0.706   1.000
        74   S_temporal_sup          0.875   0.863   0.235   1.000
        75   S_temporal_transverse   0.867   0.235   0.235   1.000
                                                                 


Tim Coalson

unread,
Mar 3, 2023, 6:06:01 PM3/3/23
to Daniel Drake, hcp-...@humanconnectome.org
You need to have a surface file with the correct vertex count open before you can view surface data files, but other than that, you can simply file->open any cifti, gifti, or nifti file like you would expect.  spec files are useful for organization, but never required.

-metric-weighted-stats should actually operate on multi-column ROIs, giving one value per map in the ROI file.  However, -cifti-parcellate is usually better because you can visualize the output.

Running wb_command -label-export-table on the dlabel file you used in -cifti-parcellate will give a text file with the labels in the order they are used in the parcellated file.  If you use python or matlab to load the parcellated cifti, rather than dumping its content to text, the header info will tell you exactly what each index in the matrix means (name and member vertices/voxels).

No, -cifti-parcellate does not separate parts of a label that occur in different structures.  You need to tell it exactly what should be considered a single parcel, meaning if you want to consider L_V1 and R_V1 as separate areas, you should give them separate names (and separate integers, or it will complain when combining them, but there are options to automatically resolve such conflicts by changing keys on the fly).  Yes, using an offset for this is a typical solution.  wb_command -gifti-label-add-prefix and -label-modify-keys (note: if a command doesn't say cifti or volume, it generally means gifti) may be useful here.
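To make the key-offset idea concrete, here's a toy Python sketch (hypothetical key lists, no real GIFTI I/O; with real files you'd use the -gifti-label-add-prefix and -label-modify-keys commands mentioned above):

```python
# Toy sketch of the key-offset approach: shift every nonzero right-hemisphere
# label key by the number of left-hemisphere parcels so that, e.g., L_V1 and
# R_V1 end up as distinct parcels.  Key 0 ("???") stays unlabeled.
# The key lists below are hypothetical, not read from real label files.

N_LEFT_PARCELS = 75  # Destrieux parcels per hemisphere

def offset_right_keys(rh_keys, offset=N_LEFT_PARCELS):
    return [k + offset if k != 0 else 0 for k in rh_keys]

lh_keys = [0, 1, 2, 75]              # left-hemisphere vertex labels
rh_keys = [0, 1, 2, 75]              # the same integers reused on the right
rh_shifted = offset_right_keys(rh_keys)
print(rh_shifted)                    # [0, 76, 77, 150]

# apart from the unlabeled key 0, the two key sets are now disjoint
assert set(lh_keys) & set(rh_shifted) == {0}
```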

Tim

Daniel Drake

unread,
Mar 3, 2023, 6:22:03 PM3/3/23
to hcp-...@humanconnectome.org
Hi again, Matt.

I've rerun using the resampled data (the header of the new .dlabel file is below).  The output pscalar.nii file is similar, with just 75 values.

Tim just confirmed that -cifti-parcellate doesn't isolate the hemispheres when forming the summaries (which makes sense, as it gives the most flexibility).  He indicated which commands I can use to manipulate the label files.  Let me work on this over the weekend and give you guys a break. 

Again, thank you very much for the hand-holding.

Best,
- Daniel

[dfdrake@gl-login3 matt]$ wb_command -file-information sub-AARAII.aparc.a2009s.annot.164k_fs_LR.dlabel.nii
Name:                    sub-AARAII.aparc.a2009s.annot.164k_fs_LR.dlabel.nii                

Type:                    CIFTI - Dense Label                                                
Structure:               CortexLeft CortexRight                                            
Data Size:               1.31 Megabytes                                                    
Maps to Surface:         true                                                              
Maps to Volume:          false                                                              
Maps with LabelTable:    true                                                              
Maps with Palette:       false                                                              
Number of Maps:          1                                        
Number of Rows:          327684                                  
Number of Columns:       1                                        
Volume Dim[0]:           0
Volume Dim[1]:           0
Volume Dim[2]:           0
Palette Type:            None
CIFTI Dim[0]:            1
CIFTI Dim[1]:            327684

ALONG_ROW map type:      LABELS
ALONG_COLUMN map type:   BRAIN_MODELS
    Has Volume Data:     false
    CortexLeft:          163842 out of 163842 vertices
    CortexRight:         163842 out of 163842 vertices

Daniel Drake

unread,
Mar 3, 2023, 6:23:48 PM3/3/23
to Tim Coalson, hcp-...@humanconnectome.org
Thank you, Tim.  This is so very helpful!  I have to step away, but I'll work on fixing my labels and get through the next steps.

Thank you both Tim and Matt.
- Daniel

Daniel Drake

unread,
Mar 15, 2023, 3:35:52 PM3/15/23
to Tim Coalson, hcp-...@humanconnectome.org
Hi Tim and Matt,

I'm back.  I just want to say how much I appreciate your engagement and feedback.  Thank you.

To summarize: I wanted to compute aggregate freesurfer morphometry measures on the HCP 164k mesh.  Matt suggested I *not* compute the measures directly from the freesurfer surfaces resampled to the HCP mesh, and instead resample the actual metrics determined by freesurfer to the new mesh.

I plan to eventually use one of the HCP-based atlases.  First, though, I wanted to measure consistency.  I compared area, volume, and thickness values reported by freesurfer for the Destrieux atlas regions with those obtained from resampled metric files produced by freesurfer and aggregated across the resampled version of the Destrieux atlas.


I was getting some inconsistency in the regional areas and volumes, and Matt pointed out that I should use an area-weighted mean (for thickness) or sum (for surface area and volume) rather than just straight averaging or summing the vertex values in each ROI.  (I used wb_command -cifti-parcellate with the -spatial-weights option and its -left-area-surf and -right-area-surf suboptions to get the weighted variants.)  The results for thickness aren't particularly different, but the sum-based metrics (area and volume) are much better.
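For what it's worth, the area-weighted mean amounts to the following (a toy Python sketch with made-up numbers; in practice -cifti-parcellate does the weighting internally):

```python
# Area-weighted mean: weight each vertex's thickness by its surface area so
# that densely sampled (small-area) vertices don't dominate the ROI average.
# Numbers are made up for illustration.

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

thickness_mm = [2.0, 3.0]   # hypothetical per-vertex thickness
area_mm2     = [1.0, 3.0]   # hypothetical per-vertex areas

print(weighted_mean(thickness_mm, area_mm2))   # 2.75, vs. plain mean 2.5
```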

It's not clear which surface to use in the area-based weighting.  For area and volume, the midthickness surface seems to reproduce the freesurfer values pretty well, though there is more variability than I would have expected.  The white matter surface reduces the variability, but there is a bias: the slope relating the original freesurfer values to resampled ones is less than one.  I'm not sure where this is coming from.

You all have been super helpful so far.  I feel like I've gotten as far as I can get, and I'm resigned to using the areal weighting with the midthickness surface.  However, any suggestions or insights you might have would be very much appreciated.

Thank you all for your help!
- Daniel


20230315_FS_to_HCP.pdf

Tim Coalson

unread,
Mar 15, 2023, 5:12:54 PM3/15/23
to Daniel Drake, hcp-...@humanconnectome.org
To be clear, if your input data is vertex areas or vertex volumes and you are summing it across vertices, you should NOT use vertex areas as weights, you should do a simple sum to get the ROI's area or volume.  The vertex weights are mostly meant for when you are averaging things that do not implicitly depend on vertex size, to compensate for the sampling rate being somewhat nonuniform.  Weighting these sums means you are really summing something like the square of the vertex areas, which doesn't really mean much of anything.  If doing this is more similar to previous results, then something has gone seriously wrong somewhere.
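As a toy Python illustration of the point above (made-up vertex areas): area-weighting a sum of vertex areas effectively sums squared areas, while a plain sum gives the true ROI area.

```python
# Summing vertex areas: the plain sum is the ROI's surface area; weighting the
# sum by the same vertex areas sums squared areas, which isn't meaningful.
# Vertex areas below are hypothetical.

vertex_areas = [0.5, 1.0, 2.0]                      # mm^2 per vertex

roi_area = sum(vertex_areas)                        # 3.5 mm^2 (correct)
weighted = sum(a * a for a in vertex_areas)         # 5.25 (sum of squares)

print(roi_area, weighted)
```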

Tim

Daniel Drake

unread,
Mar 15, 2023, 9:57:08 PM3/15/23
to Tim Coalson, hcp-...@humanconnectome.org
Hi Tim,

Regarding the double weighting: it's actually a relief to hear you say that weighting the area metric by the area (!) is a bad idea.  I didn't understand that when the areal weighting was suggested, it was only in the context of thickness.  I can't explain the relatively linear relationship seen for area and volume in the weighted context for certain surfaces; the weighting by relative area might be acting like a proxy for the mismatch in the number of vertices in the original and resampled ROIs (see below).

In one of my very initial forays, I reasoned that while FreeSurfer reports the sum of surface areas over an ROI, if I resample the area metric to a finer mesh (say double density), the sum of the resampled areas will be effectively twice what it should be.  So I adjusted the straight sum by the factor n_original/n_new.  In the attached slides I've created a row for that particular case, and the resampled areas and volumes come out similar to the freesurfer values (though there seem to be some smaller ROIs for which n_original/n_new is large).  Unfortunately, when I eventually choose an HCP atlas that doesn't have an analog in FreeSurfer, I won't know what correction factor to use.

This resampling to a different mesh reminds me of VBM or PET processing, where the jacobian of the transformation from native to MNI space is used to 'modulate' voxel values so that the sum across a transformed region is equivalent to what you'd get from a sum across the same region in the native space.  It seems like we need something like this here.

It dawns on me that instead of transforming surfaces and metrics to HCP space, I could just resample my chosen HCP atlas to FreeSurfer space and compute averages and sums there.  So maybe this has all been for naught (even though I learned quite a lot).

Thanks again for helping straighten out my mental model of how this all fits together.

Best,
- Daniel
20230315_FS_to_HCP_b.pdf

Tim Coalson

unread,
Mar 15, 2023, 10:08:47 PM3/15/23
to Daniel Drake, hcp-...@humanconnectome.org
I had just thought of what might have happened, which you just confirmed: vertex areas and vertex volumes should also generally not be resampled, they should be recomputed from the resampled surfaces.  In particular, any algorithm that resamples thickness or curvature properly will do something very wrong if you tell it to resample vertex areas or vertex volumes, because the vertices themselves are changing size and number, so trying to match the original values (which is what most resampling algorithms do) will give you the wrong answer.  Worse, though, the point of vertex areas is that they compensate for irregular meshes (like the freesurfer native mesh) where nearby triangles can have very different sizes, resulting in a lot of high-spatial-frequency content in native mesh vertex area/volume, and these standard resampling types can't preserve that information (and additionally, it is no longer relevant as-is on the new mesh anyway).

If this resulted in a much smoother map than the true vertex areas, then doing a weighted sum with the vertex areas may have slipped the vertex areas into the calculation that way, instead of the intended way, which might explain your finding, assuming the original vertex areas and volumes were somewhere close to 1 mm^2 or mm^3, as otherwise there would be a large scaling factor.  However, you saw this effect and added an ad-hoc adjustment for it, rather than realizing that resampling the vertex areas is a problem in itself, so that is the final piece of the puzzle.

So: use -surface-vertex-areas and -surface-wedge-volume on the resampled surfaces, don't resample the vertex areas or volumes (and maybe don't use VBM as a guide for good practice), and area/volume should fix itself.
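As I understand it, -surface-vertex-areas assigns each vertex one third of the area of every triangle it belongs to.  A minimal Python sketch of that idea (toy mesh; my reading of the algorithm, not Workbench's actual code):

```python
import math

# Each vertex receives one third of the area of each incident triangle.
# The single right triangle below is a toy mesh for illustration.

def triangle_area(p, q, r):
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx, cy, cz = (u[1]*v[2] - u[2]*v[1],
                  u[2]*v[0] - u[0]*v[2],
                  u[0]*v[1] - u[1]*v[0])
    return 0.5 * math.sqrt(cx*cx + cy*cy + cz*cz)  # half |cross product|

def vertex_areas(coords, triangles):
    areas = [0.0] * len(coords)
    for a, b, c in triangles:
        third = triangle_area(coords[a], coords[b], coords[c]) / 3.0
        areas[a] += third
        areas[b] += third
        areas[c] += third
    return areas

coords = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]   # unit right triangle, area 0.5
print(vertex_areas(coords, [(0, 1, 2)]))      # each vertex gets 1/6
```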

Tim

Daniel Drake

unread,
Mar 15, 2023, 11:18:04 PM3/15/23
to Tim Coalson, hcp-...@humanconnectome.org
Hi Tim,

Thanks for the insight.  I noticed the high-frequency variation in volume, and area especially, when I rendered the metrics on both the native and resampled surfaces.  As you suggest now, I had computed volume and area directly from the resampled mesh and got near-perfect agreement (those slides must still be in this thread somewhere).  The renderings of the computed area and volume (i.e., derived from the resampled mesh with the wb_command options you pointed out) were much smoother, as you explain, even if the aggregate ROI values were near-identical.  However, my attempts to reproduce freesurfer's analog of cortical thickness didn't come out so cleanly (you and I discussed how freesurfer might be computing this value, computing pial-to-white and white-to-pial and then taking the average, etc.).  That said, the computed results are not much different/worse than what I'm getting with the area-weighted resampling of ?h.thickness, so maybe I should just go back to deriving all my measures directly from the resampled mesh and forget about resampling the freesurfer metrics completely.  (In fact, I'm sort of surprised no one has made the analog of freesurfer's mris_anatomical_stats for HCP surfaces.)

You don't know how much I appreciate your input.  I work remotely so I can't just go over to someone's lab and look over their shoulder to see how they do things.  I know it takes a lot of time to write everything out.  Thank you so much for your patience as I worked through all this.

Best,
- Daniel


Harms, Michael

unread,
Mar 18, 2023, 1:30:12 PM3/18/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com, Harms, Michael

 

Just skimmed this thread, which is interesting.

 

I’m just going to comment that while the R^2 was certainly very high, I found it interesting that you didn’t find even tighter correspondence of your “Computed” thickness values (derived from the resampled HCP 164k FS surfaces) to the thickness values in the FS stats tables, given that your approach to that computation (i.e., using -signed-distance-to-surface and then averaging the result of using pial to white and white to pial as the “reference” surface) is *I think* the analog of how FS itself is computing thickness.  In which case, the difference would be reflective of the impact of the resampling of the surfaces themselves on that particular method of thickness quantification.

 

Absolutely no need to do this, but if you really wanted to nail down the issue of the algorithm for computing thickness, you could take the resampled HCP 164k FS surfaces, and compute thickness on those using FS’s own tool for computing thickness.  My prediction is that the resulting values would be basically identical to those from the “Computed” approach (using -signed-distance-to-surface).

 

Cheers,

-MH

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Associate Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.                        Tel: 314-747-6173

St. Louis, MO  63110                          Email: mha...@wustl.edu

 


Harms, Michael

unread,
Mar 18, 2023, 1:44:21 PM3/18/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com, Harms, Michael

Alternatively (and this might be simpler given the code you already have), you could take the *native* mesh HCP surfaces (which should be identical to the FS surfaces, just in a different file format) and run those through the -signed-distance-to-surface approach (with 2x average) to compute thickness.  In that case, my prediction would be that you almost perfectly replicate the values from the FS stats tables.

 

Cheers,

-MH

 


Daniel Drake

unread,
Mar 19, 2023, 11:06:05 PM3/19/23
to hcp-...@humanconnectome.org, tim.c...@gmail.com, Harms, Michael
This (and your previous mail) is actually a very good suggestion.  Thank you Michael. 

I may have to revise my scripts a little: I still haven't found the limits of the wb_commands, but I'm using -cifti-parcellate to summarize the ROI metrics (whether resampled freesurfer ones or newly computed directly from the native HCP mesh).  I don't know if I can create a cifti with a mesh that doesn't conform to one of the 32k, ... , 164k standards.  If not, I might try to fall back on -metric-weighted-stats, something Tim mentioned as another possibility.

I have a few things coming up, but I hope to get back to you in a day or two.

Thanks again!
- Daniel

Harms, Michael

unread,
Mar 20, 2023, 9:55:35 AM3/20/23
to Daniel Drake, hcp-...@humanconnectome.org, tim.c...@gmail.com

Or, just compare the values in the resulting CIFTI directly.  If the values on the individual vertices are the same, then obviously any mean of those vertices within a given parcel would be the same (and you wouldn’t need to actually create the parcellation on the native mesh!).  We may also have one of the FS parcellations on the native mesh.  I’m not sure…

 


Tim Coalson

unread,
Mar 20, 2023, 6:40:55 PM3/20/23
to Daniel Drake, hcp-...@humanconnectome.org, Harms, Michael
You can do that with -cifti-create-dense-scalar.  We don't usually advise this because we'd rather have most cifti files in circulation all using the same grayordinates space when possible, for ease of comparison.  But, as long as you are testing things, and aren't likely to make any popular pipelines start producing native-mesh cifti outputs, it should be fine.

Tim


Daniel Drake

unread,
Mar 21, 2023, 3:26:42 PM3/21/23
to Harms, Michael, hcp-...@humanconnectome.org, tim.c...@gmail.com
Hi Michael et al.

I performed the experiment suggested by Michael.  (Tim, thank you for letting me know that the cifti format is sufficiently flexible to hold non-HCP standard resolutions.  For what I present here, I wanted to stay closer to the source of the differences that are showing up and kept the analysis at the vertex level, as Michael suggested, so haven't used that capability here.)

tldr: the workbench command for area reproduces exactly the freesurfer values at each vertex of the non-resampled freesurfer mesh (for the white matter surface).  The same does *not* hold for volume or thickness.  Freesurfer seems to limit the thickness measures to 5mm, which might contribute to the "noise" I'm seeing in my estimates of aggregate values of thickness per ROI compared to freesurfer's reported values, whereas my volume estimates are more tightly in agreement with freesurfer's regional estimates.

I have one question: when converting a freesurfer metric to gifti using freesurfer's mris_convert function with the -c option, does it matter which surface I use?  (See step 2 in my code below)

Here's a table of percentiles of the reported (via freesurfer) and computed (using wb_command) values, along with the absolute and (fractional) relative differences, for one hemisphere of one subject.  (The commands I used to compute these values are provided below):

| metric    | quantity | hemi |    0% |   50% |   75% |   95% |   99% |   100% |
|-----------+----------+------+-------+-------+-------+-------+-------+--------|
| area      | reported | lh   | 0.005 | 0.674 | 0.891 | 1.222 | 1.542 | 17.889 |
| area      | computed | lh   | 0.005 | 0.674 | 0.891 | 1.222 | 1.542 | 17.889 |
| area      | absolute | lh   |     0 |     0 |     0 |     0 |     0 |      0 |
| area      | relative | lh   |     0 |     0 |     0 |     0 |     0 |      0 |
|-----------+----------+------+-------+-------+-------+-------+-------+--------|
| volume    | reported | lh   |     0 | 1.601 | 2.709 | 5.392 | 8.461 | 35.211 |
| volume    | computed | lh   |     0 | 1.606 | 2.716 | 5.401 | 8.428 | 34.447 |
| volume    | absolute | lh   |     0 | 0.058 | 0.141 | 0.431 | 1.036 | 30.164 |
| volume    | relative | lh   |     0 | 0.039 | 0.080 | 0.217 |   Inf |    Inf |
|-----------+----------+------+-------+-------+-------+-------+-------+--------|
| thickness | reported | lh   |     0 | 2.585 | 3.182 | 4.090 | 4.645 |      5 |
| thickness | computed | lh   |     0 | 2.510 | 3.080 | 3.993 | 4.664 |  6.182 |
| thickness | absolute | lh   |     0 | 0.027 | 0.063 | 0.395 | 0.758 |  2.829 |
| thickness | relative | lh   |     0 | 0.012 | 0.030 | 0.136 | 0.263 |    Inf |


Observations:
- the computation of area matches the original, freesurfer-derived area almost exactly: the maximum relative difference is down at 10^-6 (and this is a percent value).  If I substitute smoothwm for the white matter surface, the maximum relative difference is in the hundreds of percent (so quite likely there are some very small triangles in "old" that amplify any difference in "new"-"old").

- the computed volume, unlike area, does not match freesurfer.  First, there are vertices that freesurfer reports as having zero volume associated with them---this causes the relative error at those vertices to blow up.  The relative error at the 95th percentile is 0.217, i.e., a relative difference of 22%.  The median relative error is about 4%.  Results are worse using smoothwm as the lower boundary.

- the computed thickness also does not match freesurfer (thus answering Michael's query), though I'm using the approach supposedly used by freesurfer: a) compute the distance from each pial vertex to the nearest point on the white matter surface; b) compute the distance from each white matter vertex to the nearest point on the pial surface; c) at each corresponding vertex, compute thickness as the average of the two distances.  The relative error at the 99th percentile is just over 26%, down to 1.2% at the median.  Absolute differences here, however, can be pretty small: at the 75th percentile, the absolute difference is only 0.063 mm, which is pretty good!  It also looks like freesurfer puts a hard limit of 5 mm on thickness.

Conclusion: perhaps it's not surprising that the wb_command for area replicates what freesurfer uses, as that is perhaps the most straightforward calculation.  I'm not certain why volume and thickness don't match up with the freesurfer values.  I'm pretty sure I'm reproducing the thickness computation correctly, at least based on what literature I could find about freesurfer's approach.  (However, there are several white matter surfaces that freesurfer produces: *h.white, *h.white.H, *h.white.K---I didn't investigate this.)  The calculation of volume is a little more opaque, on both the freesurfer and HCP/workbench side.  (See https://brainder.org/2018/01/23/how-do-we-measure-thickness-area-and-volume-of-the-cerebral-cortex for various approaches.)
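For reference, the two-way rule I'm trying to reproduce, plus the 5 mm cap, boils down to the following toy Python sketch (hypothetical per-vertex distances; the real computation of course needs the closest-point distances from the meshes):

```python
# Average of the white->pial and pial->white closest-point distances at each
# vertex, then FreeSurfer's 5 mm cap.  My wb_command pipeline does the
# averaging but not the cap, which may explain part of the mismatch at the
# high end.  Distances below are hypothetical.

FS_MAX_THICKNESS_MM = 5.0

def thickness(white_to_pial, pial_to_white, cap=FS_MAX_THICKNESS_MM):
    return [min((abs(a) + abs(b)) / 2.0, cap)
            for a, b in zip(white_to_pial, pial_to_white)]

print(thickness([2.4, 6.5], [-2.6, 6.1]))   # second value capped at 5.0
```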

%--

Computations:
1) convert freesurfer surfaces to gifti via mris_convert.  Example:
  mris_convert lh.white lh.white.surf.gii

2) convert freesurfer metrics to gifti via mris_convert (with -c flag).  Example:
  mris_convert -c lh.thickness lh.white lh.thickness.shape.gii

There was no resampling here, just conversion to gifti.  (As such, does it really matter which surface I use in step 2, i.e., lh.white as opposed to lh.midthickness or lh.smoothwm?)

3) compute area from gifti-converted surface, e.g.,
  wb_command -surface-vertex-areas lh.white.surf.gii lh.area.white.shape.gii

4) compute volume between the white matter and pial surfaces, e.g.,
  wb_command -surface-wedge-volume \
      lh.white.surf.gii \
      lh.pial.surf.gii \
      lh.volume.white.shape.gii

5) compute thickness between white matter and pial surfaces
  wb_command -signed-distance-to-surface \
    lh.white.surf.gii \
    lh.pial.surf.gii \
    temp1.shape.gii

  wb_command -signed-distance-to-surface \
    lh.pial.surf.gii \
    lh.white.surf.gii \
    temp2.shape.gii

  wb_command -metric-math \
    "(abs(a) + abs(b))/2" \
    lh.thickness.white.shape.gii \
    -var a temp1.shape.gii \
    -var b temp2.shape.gii

6) compute the absolute and relative differences between the original and newly computed metrics:
  wb_command -metric-math \
    "abs(new-old)" \
    lh.area.white.absolute.shape.gii \
    -var old lh.area.shape.gii \
    -var new lh.area.white.shape.gii

  wb_command -metric-math \
    "abs(new-old)/old" \
    lh.area.white.relative.shape.gii \
    -var old lh.area.shape.gii \
    -var new lh.area.white.shape.gii
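A side note on the Inf entries in the "relative" rows of the table above: wherever the original freesurfer value is zero, abs(new-old)/old divides by zero.  A toy Python sketch of that behavior (made-up values):

```python
# Relative difference abs(new - old) / old, which blows up to infinity
# wherever the original (freesurfer) value is zero -- hence the Inf entries
# in the volume and thickness rows.  Values are made up for illustration.

def relative_diff(new, old):
    return [abs(n - o) / o if o != 0 else float("inf")
            for n, o in zip(new, old)]

print(relative_diff([0.1, 2.0], [0.0, 1.6]))   # [inf, ~0.25]
```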


Glasser, Matt

unread,
Mar 21, 2023, 3:30:50 PM3/21/23
to hcp-...@humanconnectome.org, Harms, Michael, tim.c...@gmail.com

Yes FreeSurfer chooses to limit thickness to 5mm.

Daniel Drake

unread,
Mar 21, 2023, 3:34:56 PM3/21/23
to hcp-...@humanconnectome.org, Harms, Michael, tim.c...@gmail.com

Harms, Michael

unread,
Mar 21, 2023, 4:03:09 PM3/21/23
to Daniel Drake, hcp-...@humanconnectome.org, Harms, Michael, Coalson, Timothy

 

Hmm.  It seems like there must be some subtle difference(s) then between -signed-distance-to-surface and FS’s thickness computation.  I wonder if the difference is that -signed-distance-to-surface uses a normal from the ‘ref’ surface (reading its usage info just now), whereas FS’s approach might literally just find the closest vertex on the other surface, without regard to any surface normal?

 

For some reason, I had it in my head that -signed-distance-to-surface simply found the closest vertex as well…

 


Tim Coalson

unread,
Mar 21, 2023, 5:14:09 PM3/21/23
to hcp-...@humanconnectome.org, Daniel Drake, Harms, Michael, Coalson, Timothy
It is not the closest vertex, it is the closest point within any triangle, and it does not restrict the direction based on surface normals.  The "normals" part is only one option to figure out whether the distance should be negative or positive (that is what "winding" means), but the even/odd test is more reliable, and thus the default and recommended.

Tim


Tim Coalson

unread,
Mar 21, 2023, 5:25:11 PM3/21/23
to Daniel Drake, Harms, Michael, hcp-...@humanconnectome.org
The wb_command computation of vertex volume uses the corresponding triangles of the two surfaces to build distorted triangular prisms, computes each one's volume with an approach suitable for arbitrary polyhedrons (based on integration of a vector field with constant expansion value, and averaging the two possible triangulations of any twisted quadrilateral face), and then assigns one third of each prism's volume to each member vertex.
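A toy version of that divergence-theorem volume computation (my own sketch, not Workbench's code): for a closed triangulated surface with outward-oriented faces, V = (1/6) * sum of p0 . (p1 x p2).  A tetrahedron stands in here for one of the distorted prisms.

```python
# Volume of a closed triangulated surface via the divergence theorem:
# integrating F = r/3 (constant divergence 1) over the surface reduces to
# summing signed tetra volumes (1/6) * p0 . (p1 x p2), one per outward face.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def closed_surface_volume(faces):
    return sum(dot(p0, cross(p1, p2)) for p0, p1, p2 in faces) / 6.0

# unit corner tetrahedron with outward-oriented faces; true volume is 1/6
O, A, B, C = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
tetra = [(A, B, C), (O, B, A), (O, A, C), (O, C, B)]
print(closed_surface_volume(tetra))   # 0.1666...
```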


Tim

Tim Coalson

unread,
Mar 21, 2023, 5:38:02 PM3/21/23
to hcp-...@humanconnectome.org, Daniel Drake, Harms, Michael
Ah, I hadn't realized this part of the help is where the confusion is:

"NOTE: this relation is NOT symmetric,
      the line from a vertex to the closest point on the 'ref' surface (the one
      that defines the signed distance function) will only align with the
      normal of the 'ref' surface."

This could be explained better - if the closest point to the reference surface is on a triangle, the line tracing that distance will necessarily be at the same angle as the triangle's normal vector.  This is simply an emergent property based on what it means to find the shortest distance to a plane.  However, if the closest point is on an edge or a vertex, the normal vector of the edge or vertex is not cleanly defined, and the range of locations for which an edge or vertex is the closest point is wedge or pyramid-shaped, and there are correspondingly a range of possible angles for a closest distance line, and this no longer holds for these edge/corner cases.

It is more of an offhand observation for the common case when close to a surface (to try to explain why it isn't symmetric), not an explanation of how it internally computes the distance, and perhaps rephrasing would help.

Tim

Tim Coalson

Mar 21, 2023, 5:41:58 PM3/21/23
to hcp-...@humanconnectome.org, Daniel Drake, Harms, Michael
Proofreading fail, the start of that should be "if the closest point on the reference surface is in the interior of a triangle".

Tim

Daniel Drake

Mar 21, 2023, 11:47:50 PM3/21/23
to Tim Coalson, hcp-...@humanconnectome.org, Harms, Michael
Thanks, Tim, for your clarifications on these issues.  Computationally, I imagine this was a pain to program---dealing with the edge cases when the nearest point is not in the interior of a triangle.  Conceptually, however, it's pretty straightforward, which is why it's so surprising to me that the computed thickness doesn't match the freesurfer estimates using the same geometries.  Maybe I'm not using the right surfaces?  (Though switching to smoothwm for the inner boundary makes things worse.)  Maybe there's some small detail about how freesurfer computes thickness that isn't reported in the overall, general description?  I don't know.

As for volume, again, that the vertex-wise results don't match is a little puzzling.  According to <<https://brainder.org/2018/01/23/how-do-we-measure-thickness-area-and-volume-of-the-cerebral-cortex/>>, freesurfer implements the analytic method of Winkler (2018), which seems to be the method you described and linked to in the github repository.  So again, it makes me wonder if I'm somehow using the wrong surfaces, or if there's some hidden adjustment going on in freesurfer that no one mentions.  (I checked reported vs computed---there is no constant ratio relating the two, nor a constant offset.)

At the moment, I'm down to this (from attachment)
- I'll use area computed directly from the HCP mesh
- I'll use volume computed directly from the HCP mesh.  Unlike surface area, it doesn't have a one-to-one match vertex-wise.  However, when I compare the aggregate volume per ROI with what freesurfer provides, the results are pretty good.  In my mind, I'm sort of guessing that even though the vertex-wise volume assignments are different, they compensate for one another so that the aggregate sum is close to what it should be.
- I'll use the resampled thickness metric provided by freesurfer, and averaged using the areal weighting.  The results aren't as clean as for the two computed metrics above, but it seems the best I can do.
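
For concreteness, the areal weighting in that last item is just a vertex-area-weighted mean (made-up numbers; the areas would come from something like -surface-vertex-areas on the midthickness):

```python
import numpy as np

# Hypothetical parcel of three vertices: resampled thickness values and
# the corresponding midthickness vertex areas.
thickness = np.array([2.0, 2.5, 3.0])    # mm
vertex_area = np.array([0.5, 1.0, 1.5])  # mm^2

plain_mean = thickness.mean()                            # 2.5
areal_mean = np.average(thickness, weights=vertex_area)  # sum(t*a)/sum(a) ~ 2.67
```

The weighted mean pulls toward the thickness values of the larger vertices, which is the desired behavior when vertex density varies across a parcel.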

Thank you all for your patience and perseverance.  It's a testament to the community that you've stuck around so long.

Best,
- Daniel
20230315_FS_to_HCP_c.pdf

Tim Coalson

Mar 22, 2023, 5:04:49 PM3/22/23
to Daniel Drake, hcp-...@humanconnectome.org, Harms, Michael
The edge cases were a little tricky, but not too bad: they use 2D projection just like the interior case, detect whether the projected point is on the "in" or "out" side of each 2D triangle edge, and look for the closest point on the edge when outside.  The point is then reconstructed in 3D and the distance measured in the obvious way (though without sign discrimination yet, which uses a second test).

However, multiple triangles generally have to be tested to find the one with the true shortest distance: you need to find the closest point in a triangle before you can be sure of its exact distance (and therefore whether it beats another triangle at a similar distance).  To make this computationally efficient, I used an octree to check close triangles first, and only skip a triangle if its bounding box is known to be further away than the current best distance, so the result is consistent with checking all triangles.
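
The interior/edge/vertex case analysis can be sketched with the standard textbook closest-point-on-triangle routine (Ericson's formulation; this is an illustration, not Workbench's actual code, and brute force over triangles stands in for the octree):

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    # Classify p into one of seven regions (3 vertices, 3 edges, interior)
    # using barycentric-style dot products, then return the closest point.
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0.0 and d2 <= 0.0:
        return a                                  # vertex region A
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0.0 and d4 <= d3:
        return b                                  # vertex region B
    vc = d1 * d4 - d3 * d2
    if vc <= 0.0 and d1 >= 0.0 and d3 <= 0.0:
        return a + (d1 / (d1 - d3)) * ab          # edge region AB
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0.0 and d5 <= d6:
        return c                                  # vertex region C
    vb = d5 * d2 - d1 * d6
    if vb <= 0.0 and d2 >= 0.0 and d6 <= 0.0:
        return a + (d2 / (d2 - d6)) * ac          # edge region AC
    va = d3 * d6 - d5 * d4
    if va <= 0.0 and (d4 - d3) >= 0.0 and (d5 - d6) >= 0.0:
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return b + w * (c - b)                    # edge region BC
    denom = 1.0 / (va + vb + vc)
    return a + ab * (vb * denom) + ac * (vc * denom)  # interior

def distance_to_surface(p, triangles):
    # Unsigned distance, checking every triangle; an octree would prune this.
    return min(np.linalg.norm(p - closest_point_on_triangle(p, *t))
               for t in triangles)
```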

I have no idea what approach freesurfer uses for signed distance, it wasn't entirely obvious how to make this reasonably fast without giving up some consistency.  Does freesurfer ever report a thinner value (ignoring the 5mm clamped cases) than the recomputed average wb_command signed distance approach on native mesh surfaces?  If so, that implies that if they are using signed distance, theirs sometimes reports an impossibly short distance (that can't reach the other surface), or my version has missed a possibility.

That blog post doesn't mention whether they account for quadrilaterals being twisted when measuring the volume, which could add a small difference.

Tim

Daniel Drake

Mar 23, 2023, 5:05:18 PM3/23/23
to Tim Coalson, hcp-...@humanconnectome.org, Harms, Michael
Hi Tim,

I didn't realize you were the actual author for all this.  Impressive!

I've attached some slides that compare the reported thickness (by freesurfer, *h.thickness) and the computed thickness (via wb_command signed distance in each direction, then taking the average of absolute values).  For thicknesses less than 3mm (or so), wb_command computes a thickness that is less than that reported by freesurfer.  For larger thicknesses, it gets a little muddier, but the majority of the computed values are still less than the freesurfer ones.

I hope this makes sense.  (I'm under the gun for another project, so this might be a little messy.)  If you have any questions, please let me know.

Off-Topic: Finally, I note (belatedly---I didn't do my homework) that some of the atlases I was contemplating using (e.g., Schaefer 2018) only come in fs_LR_32k resolution (at least that's all that I was able to find).  I guess this makes sense given their functional origin.  I realize I can upsample a 32k atlas to 164k (via -label-resample), but the suggested method is the ADAP_BARY_AREA method, using the midthickness surface for weighting.  Does that mean I should create a subject-specific atlas for each subject?

Thanks again for all your help.
- Daniel

20230320_reported_vs_computed.pdf

Tim Coalson

Mar 23, 2023, 5:19:36 PM3/23/23
to Daniel Drake, hcp-...@humanconnectome.org, Harms, Michael
ADAP_BARY_AREA allows using an average midthickness vertex area file instead of a midthickness surface.  The main purpose of this method is to automatically deal with downsampling SNR issues, and while upsampling from 32k doesn't really require it, it is good practice.

We have precomputed average vertex area files available for fs_LR here:


Tim

Tim Coalson

Mar 24, 2023, 5:23:10 PM3/24/23
to Daniel Drake, hcp-...@humanconnectome.org, Harms, Michael
As for the distribution of thickness differences, I wonder how much of the "freesurfer shorter" stuff starting above 3mm comes from when one of the two directions reports larger than 5mm.  If they clamp it before averaging, that could account for some of it, though a discrepancy near 3mm implies "1mm for one surface, >5mm for the other", which is a geometrically strange situation (maybe somewhere at the edge of the medial wall?).
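
To put numbers on that (hypothetical distances; whether freesurfer clamps each direction at 5mm before averaging is a guess):

```python
# One direction short, the other beyond the (assumed) 5mm cap.
d_white_to_pial = 1.0  # mm
d_pial_to_white = 6.0  # mm; would be clamped to 5.0 before averaging

unclamped_avg = (d_white_to_pial + d_pial_to_white) / 2          # 3.5 mm
clamped_avg = (d_white_to_pial + min(d_pial_to_white, 5.0)) / 2  # 3.0 mm
```

So clamping one direction at 5mm could make freesurfer report a value near 3mm where the unclamped signed-distance average is noticeably larger.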

For the moment, I see no reason not to blame the discrepancies on freesurfer doing something that isn't strictly the signed distance function (they may have included some "angle from normal" criteria, and of course there is the hard stop at 5mm).

Tim

Harms, Michael

Mar 27, 2023, 4:20:08 PM3/27/23
to Daniel Drake, tim.c...@gmail.com, hcp-...@humanconnectome.org

I think I’ve lost track of something: Namely, if surface area has a “one-to-one match vertex-wise”, then why are none of the plots for Area in your results PDF on the line-of-identity?

 

@Tim: When we resample the thickness.mgz file on the FS native mesh onto the HCP 164k (or 32k) standard mesh, does that process itself account for the fact that the areal weighting associated with each vertex will differ between the native and standard meshes?

 

thx

Harms, Michael

Mar 27, 2023, 4:30:21 PM3/27/23
to hcp-...@humanconnectome.org, Daniel Drake

Could it be that FS’s computation is literally computing the distance to the *closest vertex* and not the distance to the imputed point in the plane of the closest *triangle*?  (That would indeed result in smaller values from wb_command signed distance, since the distance to the closest point anywhere in the triangle is less than or equal to the distance to the closest vertex.)
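
A quick toy check of that inequality (arbitrary triangle, point over its centroid):

```python
import numpy as np

# Large triangle in the z=0 plane and a point 1mm above its centroid.
a, b, c = np.array([0., 0, 0]), np.array([10., 0, 0]), np.array([0., 10, 0])
p = np.array([10 / 3, 10 / 3, 1.0])

closest_vertex_dist = min(np.linalg.norm(p - v) for v in (a, b, c))
# The centroid is interior to the triangle, so the true closest point is
# the foot of the perpendicular, at distance exactly |z| = 1.
closest_point_dist = abs(p[2])

print(closest_vertex_dist, closest_point_dist)  # ~4.82 vs 1.0
```

The gap grows with triangle size, so a closest-vertex rule would bias thickness upward most where the mesh is coarse relative to the gyral geometry.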

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Associate Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.                        Tel: 314-747-6173

St. Louis, MO  63110                          Email: mha...@wustl.edu

 

Daniel Drake

Mar 27, 2023, 4:51:01 PM3/27/23
to Harms, Michael, tim.c...@gmail.com, hcp-...@humanconnectome.org
Michael,

I agree that the recent plots don't bear out my assertion that I can reproduce freesurfer's area measurement.  Let me get back to you on that.

In the meantime, I plotted the thickness difference metric on the native surface in hopes of seeing where the extrema of the deviations were occurring.  (The color choice maybe isn't the best.)

- Daniel
20230327_reported_vs_computed.pdf

Tim Coalson

Mar 27, 2023, 5:00:55 PM3/27/23
to Daniel Drake, Harms, Michael, hcp-...@humanconnectome.org
That surface looks a bit funky, I have a feeling the segmentation has some issues.

There is a ring of negatives around the medial wall, where the surfaces suddenly start to follow the same coordinates, so sudden changes could be a factor.  Not sure what to make of the other differences.

Tim

Tim Coalson

Mar 27, 2023, 5:11:12 PM3/27/23
to Harms, Michael, Daniel Drake, hcp-...@humanconnectome.org
I believe we use ADAP_BARY_AREA for that, so yes, in the sense that smaller vertices will have a smaller influence on the output values.  The thickness measure should be fairly smooth on good surfaces, though, so it isn't a particularly important effect for that data.

All of the current wb_command resampling methods are interpolation methods, so none of them will do the "redistribute based on area, changing the data's mean to keep the sum constant" approach, if that is what you were asking.

Tim

Glasser, Matt

Mar 27, 2023, 5:43:10 PM3/27/23
to hcp-...@humanconnectome.org, Daniel Drake

I thought FreeSurfer just measured the distance between the corresponding white and pial surface vertices.


Matt.

Harms, Michael

Mar 27, 2023, 11:04:42 PM3/27/23
to tim.c...@gmail.com, Daniel Drake, hcp-...@humanconnectome.org

 

I guess what I was wondering is whether the resampling of the native mesh thickness file is such that you would get the same areal-weighted mean thickness for a parcel in either the 164k or 32k standard meshes as you would get using the native mesh?

 


Harms, Michael

Mar 27, 2023, 11:06:17 PM3/27/23
to hcp-...@humanconnectome.org, Daniel Drake

I don’t think so.  Daniel’s understanding of what FS is doing is the same as my recollection.  Perhaps you have a link Daniel?

 



Daniel Drake

Mar 27, 2023, 11:33:29 PM3/27/23
to Harms, Michael, tim.c...@gmail.com, hcp-...@humanconnectome.org
Hi Michael,

When we started this conversation, I was resampling freesurfer's area metric (computed on its native mesh) onto HCP's 164k mesh.  Then I'd sum up areas associated with each HCP vertex within a given ROI and compare it to the area computed on the same ROI on the original freesurfer mesh.  The sums would not match: the freesurfer areas interpolated onto the HCP mesh still reflected the density of the vertices on the original mesh, even though the vertices in the resampled mesh might be much more densely spaced (or more sparsely spaced).  (My thought experiment is this: I resample the relatively dense freesurfer mesh to HCP 32k; the interpolated area metric still reflects the density of the freesurfer mesh, but there are far fewer vertices to add together for a given ROI on the new mesh, so the computed areas won't match.)
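
That thought experiment, in numbers (hypothetical mesh sizes, uniform per-vertex areas for simplicity):

```python
fine_n, coarse_n = 164_000, 32_000  # hypothetical vertex counts
patch_area = 100.0                  # mm^2, true area of some ROI

# Per-vertex area metric on the fine mesh.
fine_vertex_area = patch_area / fine_n

# Interpolating a per-vertex *area* metric preserves values, not sums:
# each coarse vertex still carries roughly the fine mesh's per-vertex area.
coarse_sum = coarse_n * fine_vertex_area

print(coarse_sum)  # ~19.5, not 100: the sum scales with vertex count
```

This is why summing a resampled area metric over an ROI understates (or overstates) the true area by roughly the ratio of vertex densities, whereas recomputing areas from the resampled surface does not.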

The other alternative is to forgo resampling the freesurfer metrics and instead compute the desired quantities directly from the resampled surface in HCP 164k.  Comparing ROI summaries, that worked very well for area, pretty well for volume, but less well for thickness.

And then you suggested, in order to eliminate the uncertainty introduced by the resampling, to not even resample the surface at all, but instead compute metrics using wb_command and then compare them, vertex-wise, to what freesurfer had determined.  In this case, area matches exactly; volume isn't too far off (though there are some weird edge cases); and thickness has a funny skew.  (See the attached slides.)

Thickness is the more complicated of the measures, in some sense, as it requires (as best I've been able to determine) averaging the measurement of the distance from a white-matter vertex to the nearest point on the pial surface and the distance from the corresponding pial-vertex back to the closest point on the white matter.

I hope this clarifies any confusion.  I have a hard time keeping track, given all the experiments we've tried.

Thank you all for your patience.  And just to clarify: I've mostly made peace with the results I'm getting: I plan to use volume and surface areas computed directly from the resampled HCP 164k mesh, and to use freesurfer's thickness metric resampled to the new mesh and parcellated via cifti-parcellate using the area-correction option.  So while I very much appreciate the discussion and am willing to keep going, I think special edge cases hidden in the freesurfer code for computing these metrics (e.g., limiting thickness to 5mm or less, for one) may make it unreasonable to assume that more transparent, straightforward approaches implemented in wb_command can exactly reproduce what's going on in freesurfer.

Thank you all.
- Daniel
20230328_reported_vs_computed.pdf

Daniel Drake

Mar 27, 2023, 11:58:19 PM3/27/23
to Harms, Michael, hcp-...@humanconnectome.org
This is one of the references I've been going by:

The quote from the referenced Fischl and Dale (2000) paper (https://www.pnas.org/doi/10.1073/pnas.200033797):
"The thickness is computed as the average of this distance measured from each surface to the other."

I've seen claims that they use the smoothed white matter surface as the inner boundary, but when I tried it, the results weren't any better.

Daniel Drake

Mar 28, 2023, 12:02:54 AM3/28/23
to Harms, Michael, tim.c...@gmail.com, hcp-...@humanconnectome.org
From what I recall, the areal correction helps somewhat, but since the density of vertices is similar between the native freesurfer mesh and HCP 164k, the correction doesn't have a major effect (I tried with and without).  When resampling to a significantly more sparse mesh (like 32k), the correction might have more of an impact.

Tim Coalson

Mar 28, 2023, 12:24:19 AM3/28/23
to Harms, Michael, Daniel Drake, hcp-...@humanconnectome.org
Parcel-averaged thickness will be very close, simply because resampling doesn't have much effect on thickness values (doesn't have fine detail or hard edges, except near the medial wall), and you are averaging them over parcels anyway.  It won't be identical because of the smoothing effect of resampling, which will do some mixing of values across the border between parcels.

It is true that the normalization inside of the ADAP_BARY_AREA method uses the same vertex areas as are being used in the weighted average.  If you resampled the parcel ROIs and kept the fuzzy edges and used a deconvolution approach, you might get closer to identical, but I think there is still a multiply before vs after problem that will add nonlinear terms to the area-weighted answer, despite the resampling being purely linear weights.

Tim
