AABC Unprocessed Imaging Files Inquiry


Brodnick, Zachary

Jan 4, 2026, 2:59:34 PM
to hcp-...@humanconnectome.org
Hello,

I have downloaded the AABC structural preprocessed data package and plan to download the concatenated preprocessed fMRI package as well. I am mainly interested in rs-fMRI and sMRI data.
I was wondering whether unprocessed T1-weighted (T1w.nii) and resting-state fMRI (rs-fMRI.nii) files are available for analysis, as I intend to use an alternative preprocessing pipeline (e.g., HALFpipe). Are additional files, such as .json sidecars and field maps, also available?
The "AABC Release 1 Imaging Data Packages" PDF documentation indicates that unprocessed images are available; however, I have been unable to locate these files within the downloaded directory structure. If those files do exist, any guidance on where they are located when I unzip a participant's folder in the Structural and fMRI packages would be greatly appreciated!
If unprocessed files are not available, which files would you recommend using for first- and second-level analyses?
Best regards,
Zach

Zach Brodnick

Graduate Research Associate

Neuroscience Graduate Program (NGP)

The Ohio State University 

Glasser, Matthew

Jan 4, 2026, 3:09:40 PM
to hcp-...@humanconnectome.org

There are unprocessed data, but it will be difficult to achieve a similar level of preprocessing quality as the recommended data.  That would take many years of software development with contributions from a consortium of world experts in neuroimaging methods.  Additionally, your results will be out of step with everyone else’s who use the data.  Looking briefly at the Halfpipe paper, it is clear that the methods used are inferior to those from the HCP Pipelines that were used to process the released AABC data and will in fact damage HCP-Style data acquisitions such that they do not maintain the fidelity of spatial localization and the quality of temporal signals that they originally had.  Perhaps it would be good to understand why you want to process the data a different way.  In general, while we welcome improvements to HCP-Style data processing, we are less enthusiastic about regressions in data quality.


Matt.

--
You received this message because you are subscribed to the Google Groups "HCP-Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hcp-users+...@humanconnectome.org.
To view this discussion visit https://groups.google.com/a/humanconnectome.org/d/msgid/hcp-users/DS7PR20MB42075607E6E701F5872006D987B9A%40DS7PR20MB4207.namprd20.prod.outlook.com.

 


The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.

Harms, Michael

Jan 5, 2026, 10:47:03 AM
to hcp-...@humanconnectome.org

 

Regarding the availability of the "unprocessed" data, make sure that you have set "Recommended = All" in the Selected Imaging Packages section on BALSA if you want to see all available packages.

 

Cheers,

-MH

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.

St. Louis, MO  63110

Zach Brodnick

Jan 29, 2026, 12:26:00 PM
to HCP-Users, mha...@wustl.edu
Hello again, 

Thank you both for answering my question. Regarding the recommended data from AABC Release 1 (and eventually AABC Release 2), I had a few questions about the file types.

I am planning on using the "minimally preprocessed" data and wanted to make sure I am using the correct files when loading them into software such as the CONN Toolbox to run seed-based correlation and functional connectivity analyses (since that program accepts preprocessed data). Additionally, I am doing volumetric structural analyses with the T1w data.

Resting Data:
For the structural data, I assume files like ${StudyFolder}/${Subject}/MNINonLinear/T1w_restore_brain.nii.gz are the correct MNI-space T1w images (also since they are skull-stripped).
For the resting data, I assume files like ${StudyFolder}/${Subject}/MNINonLinear/Results/rfMRI_REST1_AP/rfMRI_REST1_AP_hp0_clean_rclean_tclean.nii.gz are the cleaned data (477 volumes) that can be used.
I would upload the subject's Movement_Regressors.txt (Friston 12-parameter format) for the 12 motion regressors.
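For reference, a minimal sketch of reading a Movement_Regressors.txt of the kind mentioned above. The layout assumed here (12 whitespace-separated columns per volume: 6 rigid-body parameters plus their temporal derivatives) follows HCP convention, but please verify it against the release documentation before relying on it:

```python
# Sketch: parse an HCP-style Movement_Regressors.txt.
# Assumption: 12 whitespace-separated float columns per volume
# (6 rigid-body parameters + their temporal derivatives).

def parse_movement_regressors(text):
    """Return a list of rows, each a list of 12 floats (one row per volume)."""
    rows = []
    for line in text.strip().splitlines():
        values = [float(v) for v in line.split()]
        if len(values) != 12:
            raise ValueError(f"expected 12 columns, got {len(values)}")
        rows.append(values)
    return rows

# Synthetic two-volume example (not real data)
example = """\
0.01 -0.02 0.03 0.1 -0.1 0.05 0.0 0.0 0.0 0.0 0.0 0.0
0.02 -0.01 0.04 0.2 -0.2 0.06 0.01 0.01 0.01 0.1 -0.1 0.01
"""
rows = parse_movement_regressors(example)
```

In practice one would read the file from the subject's Results directory; the synthetic string stands in for that here.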

What I aim to do in CONN:
Apply 4 mm FWHM spatial smoothing, and segment the T1w images to obtain GM/WM/CSF estimates. Then for denoising, include aCompCor (5 WM + 5 CSF PCs), motion regression (12 parameters: 6 motion + derivatives), ART scrubbing using a 95th-percentile threshold, linear detrending, and band-pass filtering at 0.008–0.09 Hz.
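As an illustration of the "12 params: 6 motion + derivatives" expansion above, here is a toy pure-Python sketch. Real pipelines such as CONN do this internally; the backward-difference convention (derivative of the first volume set to zero) is an assumption here:

```python
# Sketch: expand 6 motion parameters per volume into the 12-regressor set
# (parameters + backward-difference temporal derivatives).

def expand_motion_regressors(params):
    """params: list of T rows, each 6 floats. Returns T rows of 12 floats."""
    out = []
    for t, row in enumerate(params):
        prev = params[t - 1] if t > 0 else row  # first-volume derivative = 0
        deriv = [a - b for a, b in zip(row, prev)]
        out.append(list(row) + deriv)
    return out

# Two synthetic volumes: no motion, then 0.1 mm translation in x
motion = [[0.0] * 6, [0.1, 0.0, 0.0, 0.0, 0.0, 0.0]]
expanded = expand_motion_regressors(motion)
```

Note that HCP's Movement_Regressors.txt already contains the derivative columns, so this expansion is only needed when starting from a 6-column file.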

Volumetric Structural Data:
For the structural data, I believe the ${StudyFolder}/${Subject}/T1w/T1w_acpc_dc_restore.nii.gz files are appropriate, since VBM-related toolboxes (e.g., CAT12) need native-space, non-segmented, non-skull-stripped images.

Based on my reading and understanding of the Glasser et al. 2013 HCP minimal preprocessing pipelines paper, I believe these analyses can work with your data. Please correct me if I am wrong.

Again, many thanks for always being so helpful!
-Zach

Zach Brodnick

Graduate Research Associate

Neuroscience Graduate Program (NGP)

The Ohio State University

Tim Coalson

Jan 29, 2026, 6:29:59 PM
to hcp-...@humanconnectome.org, mha...@wustl.edu
Smoothing is generally not a good processing strategy when there are other options such as group analysis or dimensionality reduction, because smoothing damages spatial specificity (additionally, group analysis of human cortical data should be done on the surface, because volume registration doesn't know how to handle the substantial individual variability in cortical folding of large portions of human cortex).  Typical volume-based smoothing in particular doesn't respect the fact that opposing sulcal banks are not directly connected, and also "wastes" some of your signal into white matter and CSF.  Our paper critiquing volume-based group cortical analysis touches on these topics: https://www.pnas.org/doi/10.1073/pnas.1801582115 .  We recommend the cifti-format versions of our cleaned data if you want to do correlation analyses that include cortex, as it naturally allows highly specific comparison or combination across subjects.

sICA and tICA have already removed artifacts from ("denoised") the data, and we determined that motion regression was not beneficial when using these (motion regression removed some neural signal without appreciable benefit to artifact removal).  WM and CSF signal, when using standard masks, are often contaminated with enough gray matter signal that it dominates any other effect (via partial voluming, pointspread function, and simple mask errors), resulting in accidental regression of neural gray matter signal out of the data.  tICA is a more selective method to remove global artifacts, such as respiration effects, without removing neural signal.

Our ICA cleanup steps are also effective at removing what motion scrubbing would remove, but more selectively (and helps "fix" artifacts that motion scrubbing would leave in as below-threshold).  We have not found bandpass filtering to be helpful, linear detrending is generally enough to deal with scanner drift in typical scan lengths.  I'm not sure what an 11-second-per-cycle high frequency cutoff would do to the expected HRF, and there are other ways to remove unstructured noise.
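For what it's worth, the "11-second-per-cycle" figure above is just the period of the proposed 0.09 Hz low-pass edge, and the 0.008 Hz high-pass edge corresponds to a 125-second cycle:

```python
# Convert the proposed band-pass edges (Hz) to periods (seconds per cycle).
high_pass_hz, low_pass_hz = 0.008, 0.09
period_low = 1.0 / low_pass_hz    # ~11.1 s per cycle at the 0.09 Hz edge
period_high = 1.0 / high_pass_hz  # ~125 s per cycle at the 0.008 Hz edge
```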

As for VBM, we generally prefer to use accurate surface representations to measure the cortical anatomy directly, rather than blurring the structural image.

Tim


Glasser, Matthew

Jan 29, 2026, 8:45:15 PM
to hcp-...@humanconnectome.org, Harms, Michael

I wouldn’t recommend approaching things that way.  The data are carefully processed in a way that preserves their spatial resolution and avoids modifying the neural signal temporally, and for the AABC data we even provide functional connectomes that you can explore.  All of what you propose below would represent regressions in quality.


Matt.

Glasser, Matthew

Jan 29, 2026, 9:51:39 PM
to hcp-...@humanconnectome.org, Harms, Michael

Thanks, Tim, for explaining all that. 

 

I would add that parcellation is preferred over smoothing if what you are actually after is areal-level effects (because you are looking for “blobs on the brain”).

 

Instead of VBM, we parameterize grey matter morphometrics into cortical thickness, cortical surface area, and cortical volume, which are real physical things, whereas “grey matter density” is not.


Matt.

 


Tim van Balkom

Jan 30, 2026, 4:53:05 PM
to HCP-Users, Glasser, Matthew
Dear Matt and colleagues,

Since I was also thinking about using unprocessed HCP-Aging resting-state fMRI data, I'm commenting on this thread.
I was likewise considering HALFpipe, which essentially runs fMRIPrep (among other tools) for rs-fMRI preprocessing. My aim is to harmonize processing of HCP-A data and other data sets (under the assumption that using different preprocessing techniques may affect outcomes between data sets).
You mention that the methods used in HALFpipe (and thus fMRIPrep) are clearly inferior to the HCP Pipelines. Could you elaborate on which processing steps are inferior? And do you mean inferior specifically for processing HCP-A data, or in general?

A second question regarding this: in both the PreFreeSurfer pipeline and the fMRIVolume pipeline, one of the processing steps is correction of gradient nonlinearity distortion. Is this specific to the HCP-YA sample, which was scanned on the Connectom scanner, or does it also apply to the HCP-A sample? I.e., am I correct in understanding that this step was skipped for the HCP-Aging sample?

Thank you for your help!
Tim


Harms, Michael

Jan 30, 2026, 6:33:50 PM
to hcp-...@humanconnectome.org, Glasser, Matthew

 

Hi,

We have applied correction for gradient nonlinearity distortion, using the appropriate coefficient file for the acquisition scanner, in the preprocessing applied to all HCP-related projects, including HCP-YA, HCP-A, and now AABC. This step was never "skipped".

 

Cheers,

-MH

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.

St. Louis, MO  63110

 

 


Glasser, Matthew

Jan 30, 2026, 10:17:20 PM
to Harms, Michael, hcp-...@humanconnectome.org

Here are the key differences between the HCP Pipelines and traditional analyses out there:

  1. HCP Pipelines align cortical data on surfaces using multi-modal features, such that cortical areas, functional networks, and task activations are as well aligned as they can be given topological differences.  Traditional analyses may use cortical folding (which is not correlated with cortical areas, functional networks, or task activations across most of the brain) or, worse, align data in the volume (which means it is not actually aligned well at all).  Please see Coalson et al., 2018 PNAS.  fMRIPrep has folding-based alignment and volume-based alignment.
  2. HCP Pipelines avoid smoothing the data, whereas traditional analyses heavily smooth the data in the volume or on the surface, worsening spatial localization ability (dramatically so in the volume).  Please see Coalson et al., 2018 PNAS.  fMRIPrep offers spatial smoothing.  In HCP we parcellate data using multi-modal cortical areas when we want a lower resolution of information (parcellation is neuroanatomically informed smoothing).
  3. HCP Pipelines use spatial independent component analysis (sICA) together with a highly accurate FIX component classifier to selectively remove spatially specific artifacts from the data (head motion, physiology, scanner artifacts) without affecting the neural signal of interest.  Traditional methods do things like nuisance regression, scrubbing, and temporal filtering, which are all unselective and affect both artifacts and neural signal.  Please see Glasser et al., 2018; 2019 NeuroImage.  fMRIPrep has ICA-AROMA, but its sICA component classifier is inferior to FIX, and it offers options for inferior cleanup approaches.
  4. HCP Pipelines use temporal independent component analysis (tICA) to selectively remove global respiratory artifacts from the data without affecting the neural signal of interest.  Traditional methods use global signal regression (which causes network-specific biases in connectivity) or equivalents, or ignore the problem entirely.  Please see Glasser et al., 2018; 2019 NeuroImage.  fMRIPrep offers options for inferior cleanup approaches.
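To make point 2's "parcellation is neuroanatomically informed smoothing" concrete, here is a toy sketch that averages vertex timeseries within labeled parcels rather than blurring across arbitrary neighborhoods. In practice this is done on CIFTI files (e.g., with Connectome Workbench's wb_command -cifti-parcellate); the pure-Python version below is illustrative only:

```python
# Sketch: parcel-mean timeseries from per-vertex timeseries and a parcel
# labeling. Averaging respects parcel boundaries, unlike volumetric smoothing.

def parcellate(timeseries, labels):
    """timeseries: dict vertex -> list of T values; labels: dict vertex -> parcel.
    Returns dict parcel -> mean timeseries (list of T values)."""
    sums, counts = {}, {}
    for v, ts in timeseries.items():
        p = labels[v]
        if p not in sums:
            sums[p] = [0.0] * len(ts)
            counts[p] = 0
        sums[p] = [s + x for s, x in zip(sums[p], ts)]
        counts[p] += 1
    return {p: [s / counts[p] for s in sums[p]] for p in sums}

# Toy example: three vertices, two parcels (hypothetical labels), T=2
ts = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [10.0, 10.0]}
lab = {0: "V1", 1: "V1", 2: "MT"}
parcels = parcellate(ts, lab)
```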

 

The data on ConnectomeDB powered by BALSA are all processed using the above recommended approaches.  Thus, there are major differences in both spatial localization quality and denoising quality between HCP Pipeline-processed data and data processed using other approaches that don’t consider the important lessons learned from the HCP.  It continues to be remarkable how some investigators will take HCP data and ruin its advantages in spatial localization and temporal resolution by applying non-HCP methods to it.  As I said before, we remain interested in new methods that do better than what we have already done (and spend a lot of effort trying to improve upon existing HCP processing methods), but we are not interested in methods that reduce data quality because they get #1-4 above wrong.

 

Matt.

Tim van Balkom

Feb 2, 2026, 6:48:06 AM
to HCP-Users, Glasser, Matthew, Harms, Michael
Dear Michael and Matt,

Thank you very much for your responses. This is really helpful.

Kind regards,
Tim
