Input to FSL


Asa Borzabadi

Jun 4, 2022, 9:20:23 AM
to hcp-...@humanconnectome.org
Dear experts,

I have a question about the input to the FSL step.
I have run the GenericVolume, GenericSurface, ICA-FIX, PostFix, MSMAll, and DeDrift scripts, which culminated in the creation of these files:
(outputs of DeDrift) [image attachment]
Then I started looking into the generate_level1_fsf files and found the following explanation:

"Image file for which to produce an FSF file will be expected to be found at:
<study-folder>/<subject-id>/MNINonLinear/Results/<task-name>/<task-name>.nii.gz"


My question is whether the input for FSL should be <task-name>_hp0_clean.nii.gz
instead of <task-name>.nii.gz. I suppose that <task-name>_hp0_clean.nii.gz, which is the output of DeDrift, could be a better option.

Could you please help me with this?

Regards,
Asa Farahani

Asa Borzabadi

Jun 4, 2022, 9:21:46 AM
to hcp-...@humanconnectome.org
In general, what should the input file to FSL be?

Harms, Michael

Jun 4, 2022, 9:36:58 AM
to hcp-...@humanconnectome.org



See the --procstring argument in the TaskAnalysis.sh script currently in the master branch. This functionality isn't part of the last release (v4.3.0).

FYI: The changes to TaskAnalysis in the master branch have not been extensively tested.

Cheers,
-MH

--
Michael Harms, Ph.D.
-----------------------------------------------------------
Associate Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave., St. Louis, MO 63110
Tel: 314-747-6173    Email: mha...@wustl.edu

 


Asa Borzabadi

Jun 4, 2022, 10:29:35 PM
to hcp-...@humanconnectome.org
Thank you for the explanation. 

Actually, my goal is to load one of the already-developed fsf files in FSL, and then I will probably need to modify it.
Hence, I do not want to use any further scripts from this point on (after the DeDrift step).

My question is which file (generated after the DeDrift step) is the most appropriate one to continue with. I went through the scripts, and I concluded that a file with the following naming might be the one I should use:

${StudyFolder}/${Subject}/MNINonLinear/Results/tfMRI_VISION1_AP/tfMRI_VISION1_Atlas_AP_MSMALL_hp0_clean.dtseries.nii

Could you confirm?

Regards,
Asa Farahani



Harms, Michael

Jun 5, 2022, 12:16:51 AM
to hcp-...@humanconnectome.org

Yes, MSMAll_hp0_clean.dtseries.nii would be good to use.

Glasser, Matt

Jun 5, 2022, 11:26:44 AM
to hcp-...@humanconnectome.org

You probably would want to use the HCP’s TaskAnalysis pipeline rather than using FEAT directly to do the analysis, but you would still use FEAT to create the appropriate .fsf.

 

Matt.

Asa Borzabadi

Jun 6, 2022, 11:56:07 AM
to hcp-...@humanconnectome.org
I thought that maybe I could use the FEAT GUI directly: input the cleaned dtseries and run FEAT to get the contrast maps.

I just figured out that it is not as easy as I thought. It seems the dtseries.nii files are not well recognized by the FEAT GUI (both in terms of TR and number of volumes; even when I changed the headers, the inconsistency in the number of volumes remained). In addition, looking into the scripts, I noticed some additional processing as well. Hence I think it is wise to use the TaskAnalysis batch files and not try to deviate from them.
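For anyone hitting the same wall: a CIFTI dtseries can be packed into a "fake NIFTI" that FSL tools will read, and converted back afterwards, using wb_command from Connectome Workbench (this is broadly what the HCP scripts do internally). A minimal sketch; the paths below are hypothetical and should be adjusted to your own study layout:

```shell
# Hypothetical paths -- adjust to your own study layout.
StudyFolder=/data/study
Subject=100102
Task=tfMRI_VISION1_AP
ResultsDir=${StudyFolder}/${Subject}/MNINonLinear/Results/${Task}

CiftiIn=${ResultsDir}/${Task}_Atlas_MSMAll_hp0_clean.dtseries.nii
# Derive the "fake NIFTI" name by swapping the extension:
FakeNifti=${CiftiIn%.dtseries.nii}_fakenifti.nii.gz
echo "$FakeNifti"

# Pack the CIFTI grayordinate time series into a NIFTI volume that FSL
# tools can read; convert statistics back to CIFTI afterwards.
# (Guarded so this sketch is a no-op when Workbench is not installed.)
if command -v wb_command >/dev/null 2>&1; then
    wb_command -cifti-convert -to-nifti "$CiftiIn" "$FakeNifti"
    # ...run the FEAT/film_gls stats on "$FakeNifti" here...
    # wb_command -cifti-convert -from-nifti <stat>.nii.gz "$CiftiIn" <stat>.dtseries.nii
fi
```

Note that the fake NIFTI is a storage trick, not a volumetric image, so spatial operations (smoothing, clustering) are not valid on it; only the per-grayordinate GLM is.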






Harms, Michael

Jun 6, 2022, 12:14:08 PM
to hcp-...@humanconnectome.org

Please let us know how it goes using the version in the master branch.  As I said, the current version in the master is relatively new, and hasn’t been extensively tested.

 

thx

Asa Borzabadi

Jun 8, 2022, 9:47:56 AM
to hcp-...@humanconnectome.org
I am really grateful for your help.
I tried the first-level analysis and it worked: I got some activation on the cortex, but the subcortical regions seem too noisy.

However, I am still not sure whether the variables I am setting are optimal, and in some cases I am not sure what they should refer to.
I have included some questions here for clarification, and I would be really grateful if you could help me.


for TaskName in ${TaskNameList}
do
        LevelOneTasksList="tfMRI_${TaskName}_AP" #Delimit runs with @ and tasks with space
        LevelOneFSFsList="tfMRI_${TaskName}_AP" #Delimit runs with @ and tasks with space
        LevelTwoTaskList="NONE" #Space delimited list
        LevelTwoFSFList="NONE" #Space delimited list

        SmoothingList="2" #Space delimited list for setting different final smoothings.  2mm is no more smoothing (above minimal preprocessing pipelines grayordinates smoothing).  Smoothing is added onto minimal preprocessing smoothing to reach desired amount
        %%%%% (1) Do you think increasing the SmoothingList could help with finding something meaningful in the subcortical regions?
        LowResMesh="32" #32 if using HCP minimal preprocessing pipeline outputs
        GrayOrdinatesResolution="2" #2mm if using HCP minimal preprocessing pipeline outputs
        OriginalSmoothingFWHM="2" #2mm if using HCP minimal preprocessing pipeline outputs
        Confound="NONE" #File located in ${SubjectID}/MNINonLinear/Results/${fMRIName} or NONE
        %%%%% (2) What should the Confound variable point to? I tried "/100102/MNINonLinear/Results/tfMRI_VISION1" but it gave me errors. I think it should point to a specific file or folder, but what is the name of that file?
        HighpassFilter="200" #Use 2000 for linear detrend, 200 is default for HCP task fMRI, NONE to turn off
        %%%%% (3) This is the high-pass filtering used by FSL, right? Should we modify it according to the task paradigm that we have?
        VolumeBasedProcessing="NO" #YES or NO. CAUTION: Only use YES if you want unconstrained volumetric blurring of your data, otherwise set to NO for faster, less biased, and more sensitive processing (grayordinates results do not use unconstrained volumetric blurring and are always produced).
        %%%%% (4) Is my understanding of this variable correct? If it is set to "YES" we have volume-based processing in NIFTI space; if it is set to "NO", we have two streams of processing, one for surface processing and one for subcortical processing. If I am wrong, can you correct me?
        RegNames="MSMAll_Test" # Use NONE to use the default surface registration
        ProcSTRING="hp0_clean" #Any preprocessing beyond CIFTI mapping and surface registration, e.g. spatial and temporal ICA cleanup, or NONE
        %%%%% (5) According to the output of the DeDrift step, shown in the figure below, the variables "RegNames" and "ProcSTRING" should be set as above, correct?
[image attachment]

        ParcellationList="NONE" # Use NONE to perform dense analysis; non-grayordinates parcellations are not supported because they are not valid for cerebral cortex.  Parcellation supersedes smoothing (i.e. smoothing is done)
        %%%%% (6) Now that I care about both the cortex and the subcortical areas, what should this point to? Is there a specific file or value?
        ParcellationFileList="NONE" # Absolute path to the parcellation dlabel file.  Also accepts NONE when the ptseries already exists and does not need to be generated.
        %%%%% (7) What exactly should it be? I do not understand what the ParcellationList and ParcellationFileList variables are and what exactly they should be.

I am really sorry for the long email and the many questions.

Thank you so much.

Best regards and warmest wishes,
Asa Farahani


Glasser, Matt

Jun 8, 2022, 10:55:53 AM
to hcp-...@humanconnectome.org
  1. You can use up to 4mm FWHM of smoothing to see if that helps.  Using more than that really starts to blur across brain areal boundaries (Coalson et al., 2018 PNAS).  Alternatively, you can use a parcellated analysis, which is a neuroanatomically informed approach to “smoothing.”  A good parcellation including subcortical structures is available here: https://balsa.wustl.edu/file/87B9N
  2. This generally should not be used.  It is a better idea to properly denoise the fMRI data before running the task GLM using, e.g., multi-run FIX.
  3. Highpass filtering generally mildly improves statistics by suppressing some long period spontaneous fluctuations. 
  4. No is what you want unless you are not using any smoothing at all and are doing only individual subject analyses. 
  5. You should use files with MSMAll in their names (e.g., ‘_Atlas_MSMAll_hp0_clean’).
  6. You can set this to HCP_MMPv1.
  7. See above.
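Pulling these answers back into the batch-script fragment from the previous message, the settings might end up looking roughly like the sketch below. This is only an illustration: the RegNames/ProcSTRING values and the dlabel path are assumptions to be checked against your own DeDrift output names and the file downloaded from BALSA.

```shell
SmoothingList="2 4"            # try up to 4mm FWHM; more blurs across areal boundaries (answer 1)
Confound="NONE"                # denoise beforehand (e.g. multi-run FIX) instead (answer 2)
HighpassFilter="200"           # mild highpass generally helps the statistics (answer 3)
VolumeBasedProcessing="NO"     # grayordinates analysis; volumetric not needed here (answer 4)
RegNames="MSMAll"              # assumption: match the registration name in your file names (answer 5)
ProcSTRING="hp0_clean"         # assumption: the cleaning suffix of your dtseries files (answer 5)
ParcellationList="HCP_MMPv1"   # parcellated analysis (answer 6)
ParcellationFileList="/path/to/HCP_MMPv1_plus_subcortex.dlabel.nii"  # hypothetical path to the BALSA dlabel (answer 7)
```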

 

No need to apologize for asking questions.

Matt.

 

Asa Borzabadi

Jun 13, 2022, 10:56:54 AM
to hcp-...@humanconnectome.org
Thank you so much for your help.

With regard to point (1): I see, but when using parcellation instead of smoothing, we somehow lose spatial precision if the area we are interested in is not a distinct parcel by itself, right?
I did use the parcellation approach, and the ultimate result was a set of parcels, some more active than others for a certain contrast, but without any vertex-wise information about the activation. That is how it works, right?

With regard to point (4), where you mentioned we can use volumetric processing when not using any smoothing at all and doing only individual-subject analyses: I assume that even for an individual-subject analysis (even a single run of a subject), we still do not have to use the volumetric analysis. Am I correct?

Yours sincerely,
Asa Farahani

Glasser, Matt

Jun 13, 2022, 11:48:33 AM
to hcp-...@humanconnectome.org
  1. If you are interested in results at the brain area level (as most studies are), parcellation gives you the most sensitivity and power.  If you are interested in finer-scale differences within areas, you probably shouldn't be smoothing much anyway.  That said, the subcortical structures are definitely not a fine-grained parcellation.  We hope to improve upon this if we get the funding…
  4. No need to use volumetric analysis if you don't need it.
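For reference, the parcellated time series behind such an analysis can be produced from the dense one with wb_command's -cifti-parcellate, which averages the dense values within each parcel (hence parcel-level, not vertex-wise, results). A sketch with hypothetical file names; the dlabel would be the parcellation downloaded from BALSA:

```shell
# Hypothetical file names -- substitute your own dense time series
# and the dlabel parcellation file downloaded from BALSA.
Dense=tfMRI_VISION1_AP_Atlas_MSMAll_hp0_clean.dtseries.nii
Labels=HCP_MMPv1_plus_subcortex.dlabel.nii
Parcellated=${Dense%.dtseries.nii}.ptseries.nii
echo "$Parcellated"

# Average the dense time series within each parcel
# (guarded so this sketch is a no-op without Connectome Workbench):
if command -v wb_command >/dev/null 2>&1; then
    wb_command -cifti-parcellate "$Dense" "$Labels" COLUMN "$Parcellated"
fi
```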

Asa Borzabadi

Jun 13, 2022, 12:39:12 PM
to hcp-...@humanconnectome.org
Thank you so much for the clarification. 

It is really helpful.

Regards,
Asa Farahani
