Group-Level Z-Stat


Austin Cooper

Nov 23, 2023, 9:29:16 AM
to HCP-Users
Hi there!

I'm having difficulty with computing group-level activation maps.

I only have one fMRI file for each subject, and thus I have been able to run first-level FEAT analyses successfully, which produce zstat#.ptseries.nii files, where # is the respective contrast number.

I've tried making my own higher-level analysis .fsf file through FSL by adding all subjects' folder paths to their respective zstat#.ptseries.nii files, but FSL does not recognize them as sufficient inputs.

Am I required to run the 2nd level task analysis through the TaskfMRIBatch? I thought this was only used if there were multiple separate fMRI files of the same task type for each subject.

Could you please point me in the right direction for computing these group-level activation maps?

Warm regards from Montreal,
Austin

Glasser, Matt

Nov 23, 2023, 10:49:50 AM
to hcp-...@humanconnectome.org

I’m honestly not sure why the task level 3 (group) analysis script is not in the HCP Pipelines repo.  I had one in my working copy of the HCP Pipelines, which I have just committed under the task analysis folder.  Consider it beta, as I’m pretty sure it was working for me, but it hasn’t been public before.  If you have only one run per subject, you skip level 2 and go straight to level 3. 

 

Level 3 .fsf files follow the pattern of the attached example (a level 3 .fsf for 28 subjects): change “28” in that file to however many subjects you have, then find the two sections that have entries numbered 1-28 and extend them to 1-N for your subject count.
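Assuming FEAT's usual higher-level .fsf key names, the per-subject edits described above can be sketched as a generated fragment (NSUBJ and the output file name are made up, and the real template carries many more settings that should be left as they are):

```shell
# Sketch only: generate the subject-count keys and the two per-subject
# sections for N subjects. Key names follow FEAT higher-level .fsf
# conventions; NSUBJ and level3_fragment.fsf are assumptions.
NSUBJ=40
{
  echo "set fmri(npts) ${NSUBJ}"       # number of input copes
  echo "set fmri(multiple) ${NSUBJ}"   # number of lower-level analyses
  for i in $(seq 1 "${NSUBJ}"); do
    echo "set fmri(evg${i}.1) 1"       # EV value for input i (group mean)
    echo "set fmri(groupmem.${i}) 1"   # group membership for input i
  done
} > level3_fragment.fsf
wc -l level3_fragment.fsf
```

The generated lines would replace the corresponding 1-28 sections of the attached template rather than stand alone.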

 

Hope this helps,

 

Matt.

--
You received this message because you are subscribed to the Google Groups "HCP-Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hcp-users+...@humanconnectome.org.
To view this discussion on the web visit https://groups.google.com/a/humanconnectome.org/d/msgid/hcp-users/5ff3f85e-e03e-4291-8899-70fc5dcff7ccn%40humanconnectome.org.

 


The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.

design_template28.fsf

Austin Cooper

Nov 23, 2023, 4:14:37 PM
to HCP-Users, glas...@wustl.edu
Great! Thank you Matt!

Regarding the level_3 bash script you uploaded, what is the group_folder variable? Is this where the group-level results will be written?

Glasser, Matt

Nov 23, 2023, 4:17:56 PM
to Austin Cooper, HCP-Users

Right. Usually something like:

 

${StudyFolder}/${GroupName}

 

At the same level as

 

${StudyFolder}/${Subject}

 

It does not have to exist. 
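A minimal sketch of that layout, with made-up names:

```shell
# Hypothetical example values: the group folder sits alongside the
# per-subject folders under the study folder.
StudyFolder="$PWD/MyStudy"   # stand-in for your real study path
GroupName=GroupAvg           # assumed name for the group analysis
Subject=01
GroupFolder="${StudyFolder}/${GroupName}"
# mkdir is optional here, since the level-3 script can create the folder:
mkdir -p "${StudyFolder}/${Subject}" "${GroupFolder}"
ls "${StudyFolder}"
```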

Message has been deleted
Message has been deleted

Glasser, Matt

Nov 24, 2023, 1:46:20 PM
to Austin Cooper, HCP-Users

It could be that you need to make summary directories at level 1 for this to work. 

Matt.

 

From: Austin Cooper <austin....@gmail.com>
Date: Friday, November 24, 2023 at 12:29 PM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: Austin Cooper <austin....@gmail.com>, "Glasser, Matt" <glas...@wustl.edu>
Subject: Re: [hcp-users] Group-Level Z-Stat

 

 

I decided to just convert the design.con to Contrasts.txt, and that resolves the error. I hope this is the right method.

 

Now I receive this as the error before the script stops: "Can't find key fmri(level)"

 

Any ideas?

On Friday, 24 November 2023 at 11:53:20 UTC-5 Austin Cooper wrote:

Got it! Thank you!

 

I now receive an error as such: 

 

Fri Nov 24 11:03:17 EST 2023
cat: /project/6001995/Shmuel_Mendola/Data/MGH/HCPNaming//01/MNINonLinear/Results/all_fMRI_data/all_fMRI_data_hp200_s2_level1_MSMAll_hp0_clean_HCP_MMPv1.feat/Contrasts.txt: No such file or directory

 

My first-level analysis did not produce a file named "Contrasts.txt", though it ran to completion successfully. How is this file supposed to be set?

Austin Cooper

Nov 24, 2023, 1:51:45 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Thanks Matt.

I'm not exactly sure what you mean, but it seems the 3rd level expects outputs from the second level taskAnalysisBatch.

In particular, it expected *.feat/Contrasts.txt, which seems to be set in level 2 (I ended up making it myself by converting the design.con to Contrasts.txt). It also expects cope#.feat folders within the overarching *.feat/ParcellatedStats/ folder, and I don't have those; I only have zstat*.ptseries.nii files in the ParcellatedStats folder.
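For reference, the design.con-to-Contrasts.txt conversion described here can be sketched like this (the mock design.con mimics FEAT's standard header format; real contrast names will differ):

```shell
# Sketch: pull the /ContrastName lines out of FEAT's design.con into a
# plain-text list of contrast names, one per line. The mock file below
# stands in for a real design.con.
cat > design.con <<'EOF'
/ContrastName1	task>baseline
/ContrastName2	baseline>task
/NumWaves	2
/NumContrasts	2
/Matrix
1 0
0 1
EOF
awk '/^\/ContrastName/ {print $2}' design.con > Contrasts.txt
cat Contrasts.txt
```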

This is the current error:
----- ANALYSIS = ParcellatedStats -----
COPE 1, /ContrastName1, Preparing inputs, Fri Nov 24 13:36:44 EST 2023
ERROR: /project/6001995/Shmuel_Mendola/Data/MGH/HCPNaming/01/MNINonLinear/Results/all_fMRI_data/all_fMRI_data_hp200_s2_level1_MSMAll_hp0_clean_HCP_MMPv1.feat/ParcellatedStats/cope1.feat/mask.ptseries.nii does not exist

Is there a workaround?

Glasser, Matt

Nov 24, 2023, 2:20:54 PM
to Austin Cooper, HCP-Users

Use the --summaryname option on the Level1+2 task analysis script to make level 2-like folders.

Austin Cooper

Nov 27, 2023, 3:12:44 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Thanks Matt!

I've been troubleshooting this summary script and think it is nearly set up, though I now get an error that I'm uncertain how to fix. This error stems from the makeSubjectTaskSummary.sh script, particularly this segment of code:

if [[ "$Analysis" = "GrayordinatesStats" || "$Analysis" = "ParcellatedStats" ]] ; then
    for nifti_in in $tmpdir/{mask,tdof_t1}.nii.gz; do
        cifti_template=$( ls ${LevelOneFEATDir}/${Analysis}/pe*.${Extension} | head -1 )
        cifti_out=$( echo $nifti_in | sed -e "s|nii.gz|${Extension}|" )
        ${CARET7DIR}/wb_command -cifti-convert -from-nifti ${nifti_in} ${cifti_template} ${cifti_out} -reset-timepoints 1 1
    done
fi

and the error is:

Using Level1
mkdir: created directory '/project/6001995/Shmuel_Mendola/Data/MGH/HCPNaming/01/MNINonLinear/Results/all_fMRI_data/all_fMRI_data_hp200_s2_MSMAll_hp0_clean_HCP_MMPv1_subjectSummary.feat/ParcellatedStats/tmp'
ls: cannot access '/project/6001995/Shmuel_Mendola/Data/MGH/HCPNaming/01/MNINonLinear/Results/all_fMRI_data/all_fMRI_data_hp200_s2_level1_MSMAll_hp0_clean_HCP_MMPv1.feat/ParcellatedStats/pe*.ptseries.nii': No such file or directory



Do you happen to know what kind of file it is expecting, and from what command/stage this file should be produced?

Glasser, Matt

Nov 27, 2023, 8:39:25 PM
to Austin Cooper, HCP-Users

Unfortunately, I didn’t write this part of the code, and the person who did has left.  If you can sort it out, we would love to get the patch to fix it.  You could try running the task analysis on HCP data first with one or two runs to see how it works, and then see if you can figure out the issue.

Austin Cooper

Nov 28, 2023, 9:35:47 AM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Indeed that is quite unfortunate.

My immediate question is whether the script is flawed or not. Have you ever seen a file/folder that matches the pattern pe*.ptseries.nii? Could it be, instead, that it should be looking for cope*.ptseries.nii?

Matt, have you run this makeSubjectTaskSummary.sh script before? Is there any chance you have the outputs from it, such that we can maybe answer this question?

Anyway, yes, I hope to help solve the problem, and thank you for your quick responses! :)


Austin

Glasser, Matt

Nov 28, 2023, 6:17:53 PM
to hcp-...@humanconnectome.org, Austin Cooper

Actually, I have run it in this use case before.  I have both pe and cope in the parcellated folder for level 1.  In the Summary directory, I believe I have the appropriate files for level 3 (including the files that triggered the initial errors you reported); however, I never ran level 3 in this situation (though at least I would not have gotten the error you reported).  Thus, I am at a bit of a loss as to the issues you are having.  Perhaps do as I suggested and run this on some HCP data to prove it can work on your end?  Here is what an example run looks like for me:

 



Matt.

Austin Cooper

Nov 30, 2023, 5:05:26 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Interesting. Thank you for sharing, Matt.

I just compared my level 1 .fsf file to the standard HCP ones and saw a difference that is likely the cause. HCP .fsf files have:

# Carry out post-stats steps?
set fmri(poststats_yn) 1


And I had:

# Carry out post-stats steps?
set fmri(poststats_yn) 0


I hope this solves the issue. 
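If that flag is the culprit, flipping it is a one-liner; here is a sketch on a mock file (the real file name will differ):

```shell
# Sketch: enable post-stats in a level-1 .fsf. "level1.fsf" is a mock
# stand-in created here so the edit can be demonstrated end to end.
cat > level1.fsf <<'EOF'
# Carry out post-stats steps?
set fmri(poststats_yn) 0
EOF
sed -i 's/set fmri(poststats_yn) 0/set fmri(poststats_yn) 1/' level1.fsf
grep 'poststats_yn' level1.fsf
```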

Will post when I know.



Warm regards,
Austin
taskfMRI_processing_control{01}.out

Glasser, Matt

Nov 30, 2023, 6:28:05 PM
to Austin Cooper, HCP-Users

If you aren’t getting errors when you run the task fMRI analysis pipeline but also aren’t getting the expected outputs, an issue with your .fsf is likely, because that file encodes all the stats.


Matt.

 


Austin Cooper

Dec 2, 2023, 2:00:37 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Hey Matt! Any chance you have the associated output log from a successful taskfMRI level 1 analysis? I'd like to compare it to mine. I compared my .fsf file and it is no different from the HCP level 1 .fsf files, or from another .fsf file of mine that I used on the same data in a volumetric manner (which did end up producing all the pe*.nii.gz files).

So I doubt it is an issue with my .fsf file, but it would be nice to see an output log showing what a fully completed TaskfMRILevel1 script should produce.

Warm regards,
Austin


Glasser, Matt

Dec 2, 2023, 2:04:42 PM
to Austin Cooper, HCP-Users

That is the second opened folder towards the bottom.  A few zstats are cut off at the bottom, but otherwise that is a complete set of outputs.

Austin Cooper

Dec 2, 2023, 3:25:44 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Ah, sorry for the lack of clarity in my request, Matt.

Rather, I'm wondering if you have an output log, such as the one I include here. I'd like to compare one that processes all the pe* files, to mine, which does not.

Also, I wonder if you can clarify what you have placed in the .fsf for referencing the preprocessed fMRI data, particularly this line in the .fsf file:

# 4D AVW data or FEAT directory (1)
set feat_files(1) "WHAT ARE YOU PLACING HERE???"


I wonder if this could be the problem. 

And lastly, I see that when running the Task1fMRIAnalysis there are 2 new .ptseries.nii files produced within the folder where my processed fMRI data is. Is this to be expected? Right now, the data I'm putting into the .fsf file for analysis is titled all_fMRI_data_Atlas_hp0_clean.dtseries.nii, whilst the newly computed files that result from running the Task1fMRIAnalysis are all_fMRI_data_Atlas_hp200_s2_MSMAll_hp0_clean_HCP_MMPv1.ptseries.nii and all_fMRI_data_Atlas_s2_MSMAll_hp0_clean_HCP_MMPv1.ptseries.nii. Does this raise any red flags for you?

Thanks for your timely help. It's truly appreciated!


Austin

taskfMRI_processing_control{01}.out

Glasser, Matt

Dec 2, 2023, 3:39:27 PM
to Austin Cooper, HCP-Users

I don’t think so; I probably deleted it.  That said, it does look like a bunch of outputs are not created at all.

 

Out of curiosity, what happens if you attempt to run FEAT itself with your current .fsf on a subject’s NIFTI files? 

 

The .fsf doesn’t actually matter for most of the mechanics of the task analysis pipeline but instead it is there to store the task design.  Do the design.pngs that get spit out look correct?  Can you paste them here?


Matt.

Austin Cooper

Dec 4, 2023, 1:17:10 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Hey Matt!

When I run the .fsf file through FEAT, outside the HCP TaskAnalysis pipeline, it runs just fine. The .fsf files are the same except for the paths (the HCP-styled one uses relative paths and the other uses absolute paths); other than that all is the same, even including the fMRI file being referenced... is this proper?

The design.png created via the HCP pipeline and via the method free from the HCP pipeline are exactly the same (attached).

I also again attach the output file from when I run the HCP TaskAnalysis pipeline for my subject 01, along with the two .fsf files mentioned above.

I'm really perplexed that the script runs through without error, and that it takes only 2 minutes to complete.

taskfMRI_processing_control{01}.out
hcp_styled_for_FEAT.fsf
design.png
all_fMRI_data_hp200_s2_level1.fsf

Glasser, Matt

Dec 4, 2023, 1:36:05 PM
to hcp-...@humanconnectome.org, Austin Cooper

So to summarize:

  1. The .fsf works fine in FEAT with volume files.
  2. The same .fsf does not produce all the outputs in the HCP Pipelines with parcellated CIFTI files.
  3. Can you also check if there is an issue with dense CIFTI files (.dtseries.nii)?
  4. What does the contrast matrix .png look like?  Is it also the same?
  5. You are using quite an old version of FSL, so upgrading to the latest could possibly be helpful. 

 

Parcellated is expected to be super fast because it is a tiny amount of data being run through the GLM.


Matt.

Austin Cooper

Dec 4, 2023, 2:46:19 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Hey Matt, yes that all sounds right.

I've tried running the analysis as a dense-based analysis (i.e. ParcellationList="NONE" instead of the previous ParcellationList="HCP_MMPv1"). This analysis is still running, and I will update when it has run through to completion. The design.png and design_cov.png are exactly the same as before.

Regarding the FSL version, it says 6.00 in the .fsf, but my loaded module version is 6.0.4, so I don't know why these files say that.

And regarding this:
  1. Can you also check if there is an issue with dense CIFTI files (.dtseries.nii)?
What is the easiest way to do this? I've successfully opened it, but I'm not certain which other files I need to load in order to visualize it; I'm still new to this data format.

Glasser, Matt

Dec 4, 2023, 4:41:29 PM
to hcp-...@humanconnectome.org, Austin Cooper

The current version of FSL is 6.0.7.7 released a couple of weeks ago.  You have 6.0.4, which was released 3 years ago. 

 

The main thing to check is whether the appropriate files are generated (and not just zstats). 
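One way to make missing classes of output obvious is to count them per prefix (a sketch; the directory here is a mock standing in for a real GrayordinatesStats or ParcellatedStats folder, and the function name is made up):

```shell
# Sketch (paths hypothetical): count each class of stats file in a
# level-1 stats directory so missing pe*/cope* files stand out.
check_outputs() {
  local dir=$1 ext=$2
  local prefix
  for prefix in pe cope varcope zstat tstat; do
    # Unmatched globs make ls fail quietly, so the count is 0.
    printf '%s: %s\n' "$prefix" "$(ls "$dir"/"$prefix"*."$ext" 2>/dev/null | wc -l)"
  done
}
# Demo on a mock directory that, like Austin's, has only zstats:
mkdir -p mock_stats
touch mock_stats/zstat1.dtseries.nii mock_stats/zstat2.dtseries.nii
check_outputs mock_stats dtseries.nii
```

On a complete run, every prefix should report a nonzero count.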

 

You need some surfaces and volumes to display the .dtseries.nii files.  Perhaps Jenn can point you to the Connectome Workbench tutorial and some sample .spec files for visualization. 

Elam, Jennifer

Dec 4, 2023, 4:53:37 PM
to hcp-...@humanconnectome.org, Austin Cooper
Hi Austin,
The Workbench tutorial dataset and tutorial PDF can be downloaded from BALSA: https://balsa.wustl.edu/study/kN3mg
If you haven't registered for BALSA before you will have to do that and sign the HCP Data Use Terms before you can download the data.

If you are following the tutorial, skip the loading a .spec file part and go to the loading a scene file step. We inadvertently left the spec file part in without providing a spec file in the dataset. You can probably get a feel for most of the basic parts of Workbench by going through the first 3-4 scenes.

The tutorial data surface files (*.surf.gii) can be used as a "base" for you to load your brain maps from your work in as layers for visualization. Alternatively, you can use the S1200 surfaces in this dataset: https://balsa.wustl.edu/reference/6V6gD

Best,
Jenn

Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
el...@wustl.edu
www.humanconnectome.org




Austin Cooper

Dec 4, 2023, 5:16:34 PM
to HCP-Users, el...@wustl.edu, Austin Cooper
Thanks Jenn!

I've loaded both of my spec files (01.MSMAll.164k_fs_LR.wb.spec and 01.164k_fs_LR.wb.spec), though when I try to add my overlay it says the number of nodes is incompatible.

Does this indicate that several steps of my preprocessing are incompatible with each other?

Austin Cooper

Dec 4, 2023, 5:17:40 PM
to HCP-Users, Austin Cooper, el...@wustl.edu
In particular, my overlay is my .dtseries.nii file and the error message is included here.
incompatible_nodes.png

Glasser, Matt

Dec 4, 2023, 6:59:23 PM
to hcp-...@humanconnectome.org, Austin Cooper, Elam, Jennifer

Sounds like you need 32k surfaces.

Austin Cooper

Dec 5, 2023, 3:52:38 PM
to HCP-Users, glas...@wustl.edu, Austin Cooper
To confirm, I opened the .dtseries.nii data and it all looks good, so it shouldn't be an issue with the dense CIFTI files.

To be clear, should the pe*.nii.gz files and all other supplementary files be produced by the following code section found in the TaskfMRILevel1.sh script? It seems quite odd that something so simple is not getting produced.

##### RUN film_gls (GLM ANALYSIS ON LEVEL 1) #####

# Run CIFTI Dense Grayordinates Analysis (if requested)
if $runDense ; then
    # Dense Grayordinates Processing
    log_Msg "MAIN: RUN_GLM: Dense Grayordinates Analysis"
    #Split into surface and volume
    log_Msg "MAIN: RUN_GLM: Split into surface and volume"
    ${CARET7DIR}/wb_command -cifti-separate-all ${ResultsFolder}/${LevelOnefMRIName}/${LevelOnefMRIName}_Atlas${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.dtseries.nii -volume ${FEATDir}/${LevelOnefMRIName}_AtlasSubcortical${TemporalFilterString}${SmoothingString}.nii.gz -left ${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi.L.${LowResMesh}k_fs_LR.func.gii -right ${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi.R.${LowResMesh}k_fs_LR.func.gii

    #Run film_gls on subcortical volume data
    log_Msg "MAIN: RUN_GLM: Run film_gls on subcortical volume data"
    film_gls --rn=${FEATDir}/SubcorticalVolumeStats --sa --ms=5 --in=${FEATDir}/${LevelOnefMRIName}_AtlasSubcortical${TemporalFilterString}${SmoothingString}.nii.gz --pd=${DesignMatrix} --con=${DesignContrasts} ${ExtraArgs} --thr=1 --mode=volumetric
    rm ${FEATDir}/${LevelOnefMRIName}_AtlasSubcortical${TemporalFilterString}${SmoothingString}.nii.gz

    #Run film_gls on cortical surface data
    log_Msg "MAIN: RUN_GLM: Run film_gls on cortical surface data"
    for Hemisphere in L R ; do
        #Prepare for film_gls  
        log_Msg "MAIN: RUN_GLM: Prepare for film_gls"
        ${CARET7DIR}/wb_command -metric-dilate ${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi.${Hemisphere}.${LowResMesh}k_fs_LR.func.gii ${DownSampleFolder}/${Subject}.${Hemisphere}.midthickness.${LowResMesh}k_fs_LR.surf.gii 50 ${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi_dil.${Hemisphere}.${LowResMesh}k_fs_LR.func.gii -nearest

        #Run film_gls on surface data
        log_Msg "MAIN: RUN_GLM: Run film_gls on surface data"
        film_gls --rn=${FEATDir}/${Hemisphere}_SurfaceStats --sa --ms=15 --epith=5 --in2=${DownSampleFolder}/${Subject}.${Hemisphere}.midthickness.${LowResMesh}k_fs_LR.surf.gii --in=${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi_dil.${Hemisphere}.${LowResMesh}k_fs_LR.func.gii --pd=${DesignMatrix} --con=${DesignContrasts} ${ExtraArgs} --mode=surface
        rm ${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi_dil.${Hemisphere}.${LowResMesh}k_fs_LR.func.gii ${FEATDir}/${LevelOnefMRIName}${TemporalFilterString}${SmoothingString}${RegString}${ProcSTRING}${LowPassSTRING}.atlasroi.${Hemisphere}.${LowResMesh}k_fs_LR.func.gii
    done

Another curious update:

I removed the lines of code in the dense task fMRI analysis part of the script that rm -r the SubcorticalVolumeStats, R_SurfaceStats, and L_SurfaceStats folders once the GrayordinatesStats folder has been created. When I open up both SurfaceStats folders I see that all supplementary files (tstat*.func.gii, cope*.func.gii, varcope*.func.gii, pe*.func.gii, etc.) are present. Oddly enough, they are not there for the SubcorticalVolumeStats or GrayordinatesStats folders.

So these files are at least getting produced at the surface level, but are not carrying over to grayordinates (maybe because they aren't produced for subcortical?)

Really hoping I can solve this issue in the near future.

Warm regards,
Austin

Austin Cooper

Dec 6, 2023, 4:50:56 PM
to HCP-Users, Austin Cooper, glas...@wustl.edu
Matt, I ran the pipeline with data from an HCP subject and it worked, producing all the files that I'm yearning for.

I'm wondering if there is something wrong with the subcortical data of mine, since as mentioned above, it is the subcortical intermediate file that does not receive the pe*.nii.gz files and all other files that appear for the intermediate surface folder. 
The thing is, when I visualize my .dtseries.nii data it shows both surface and subcortical data, much like the HCP data, so it is hard for me to determine why things run through for the HCP data and not mine. I include an image that displays this. I do wonder about the discrepancy in intensity levels: the processed HCP data is more uniform than mine.

So it is indeed likely an issue with my data, though I can't seem to determine what it is; the same pipeline was run on both datasets and only worked properly on the HCP dataset.

myDataOnTop_HCPDataOnBottom.png

Glasser, Matt

Dec 6, 2023, 8:28:16 PM
to Austin Cooper, HCP-Users

Is your input fMRI data demeaned?  That doesn’t work with FEAT.

 

So you have found that neither subcortical nor parcellated works for you; both use a volume workflow (a fake volume, in the parcellated case), whereas surface uses a somewhat different workflow.

Austin Cooper

Dec 7, 2023, 11:09:51 AM
to HCP-Users, glas...@wustl.edu, Austin Cooper
Ah, this is likely the issue since I am using the concatenated data output from the preprocessing steps.

You said the following about the concatenated data in a previous thread: 

"Each fMRI run is normalized to a grand mean 10000.  Runs are demeaned before concatenation.  Unstructured noise is equalized across runs using variance normalization. Actually, the problem with concatenating across runs for task fMRI analyses of the sort you mentioned comes from the demeaning step."

So, am I right in saying that I cannot use this preprocessed concatenated data for the task fMRI analysis, and that I'll instead have to run FEAT on each individual fMRI run, or concatenate the runs myself without demeaning them? The latter option is likely problematic, since keeping each run's mean may introduce scan-to-scan differences that are not helpful.

Glasser, Matt

Dec 7, 2023, 11:19:10 AM
to hcp-...@humanconnectome.org, Austin Cooper

Okay, I didn’t connect the two questions.  So there are major problems trying to compare across runs, because there is no guarantee that the means will be equivalent (for MR physics reasons).  I think I said that in the past.  This is why we demean the images before concatenating them, so that we are not affected by such issues.  Also, if you are comparing conditions that are only present within separate runs, some or all of your effect of interest could end up in the image mean.

 

Separately from all of that, you need to have a mean image to use FILM properly with the volume workflow.  You could add the grand mean (across all the runs) back into the images to get it to run; however, that does not address the problems I mentioned above for your specific analysis.  Perhaps a better strategy would be to run ICA on the concatenated data and look for components that are present in only one kind of run.  If you have enough data, temporal ICA is quite good at this (and will pull out things like task block designs; see Glasser et al., 2018 Neuroimage).
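The "add the grand mean back" option might be sketched like this with Connectome Workbench (file names are hypothetical, and 10000 is the grand-mean target mentioned earlier in the thread; shown as a dry run since it needs wb_command on PATH). As noted above, this gets FILM to run but does not address the cross-run mean problem:

```shell
# Hypothetical file names; 'x + 10000' adds a constant mean back onto
# every grayordinate so FILM sees a nonzero mean image.
in_cifti=tfMRI_concat_demeaned.dtseries.nii
out_cifti=tfMRI_concat_remeaned.dtseries.nii
cmd=(wb_command -cifti-math 'x + 10000' "$out_cifti" -var x "$in_cifti")
printf 'would run: %s\n' "${cmd[*]}"
# "${cmd[@]}"   # uncomment to run for real
```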
