I’m honestly not sure why the level 3 (group) task analysis script is not in the HCP Pipelines repo. I had one in my working copy of the HCP Pipelines, which I have just committed under the task analysis folder. Consider it beta: I’m pretty sure it was working for me, but it hasn’t been public before. If you have only one run per subject, you skip level 2 and go straight to level 3.
Level 3 .fsf files follow the sort of pattern in the attached file (an example level 3 .fsf for 28 subjects): change “28” in that file to however many subjects you have, then find the two sections with entries numbered 1 through 28 and adjust them to cover 1 through your number of subjects.
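Those edits can be scripted. A rough sketch in Python (the key names fmri(npts), fmri(multiple), fmri(evgN.1), and fmri(groupmem.N) are assumptions based on standard FEAT group-average templates; check them against the attached example before relying on this):

```python
import re

def patch_group_fsf(text, n_subjects):
    """Rewrite a FEAT level-3 .fsf template for a new subject count.

    Assumes the standard FEAT key names fmri(npts), fmri(multiple),
    fmri(evgN.1) and fmri(groupmem.N) for a single-group average design;
    verify these against your own template.
    """
    # Update the total number of inputs.
    text = re.sub(r'set fmri\(npts\) \d+',
                  f'set fmri(npts) {n_subjects}', text)
    text = re.sub(r'set fmri\(multiple\) \d+',
                  f'set fmri(multiple) {n_subjects}', text)
    # Drop the old per-subject entries...
    text = re.sub(r'set fmri\(evg\d+\.1\) .*\n', '', text)
    text = re.sub(r'set fmri\(groupmem\.\d+\) .*\n', '', text)
    # ...and append one entry per subject (single-group mean design).
    for i in range(1, n_subjects + 1):
        text += f'set fmri(evg{i}.1) 1\n'
        text += f'set fmri(groupmem.{i}) 1\n'
    return text
```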
Hope this helps,
Matt.
--
To view this discussion on the web visit
https://groups.google.com/a/humanconnectome.org/d/msgid/hcp-users/5ff3f85e-e03e-4291-8899-70fc5dcff7ccn%40humanconnectome.org.
Right. Usually something like:
${StudyFolder}/${GroupName}
At the same level as
${StudyFolder}/${Subject}
It does not have to exist.
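Concretely, the layout looks like this (the study folder, group name, and subject ID below are placeholders):

```python
import os

# Placeholder names -- substitute your own study folder, group name,
# and subject IDs.
study_folder = "/data/MyStudy"

subject_dir = os.path.join(study_folder, "100307")       # per-subject data
group_dir = os.path.join(study_folder, "GroupAnalysis")  # group outputs

# The group directory is a sibling of the subject directories; it does
# not need to exist in advance, since the pipeline will create it.
print(subject_dir)
print(group_dir)
```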
It could be that you need to make summary directories at level 1 for this to work.
Matt.
From: Austin Cooper <austin....@gmail.com>
Date: Friday, November 24, 2023 at 12:29 PM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: Austin Cooper <austin....@gmail.com>, "Glasser, Matt" <glas...@wustl.edu>
Subject: Re: [hcp-users] Group-Level Z-Stat
I decided to just convert the design.con to Contrasts.txt, and that resolves the error. I hope this is the right method.
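For reference, one way to do that conversion: FEAT’s design.con stores contrast names in /ContrastNameN header lines, and Contrasts.txt appears to be just one contrast name per line (that output format is an assumption here; check it against a Contrasts.txt from an HCP example dataset):

```python
def design_con_to_contrasts(con_text):
    """Extract contrast names from a FEAT design.con header.

    design.con header lines look like: /ContrastName1 FACES-SHAPES
    Returns one name per line, which appears to be the Contrasts.txt
    format (verify against an HCP example before trusting this).
    """
    names = []
    for line in con_text.splitlines():
        if line.startswith("/ContrastName"):
            # Split off the key, keep the rest as the contrast name.
            parts = line.split(None, 1)
            if len(parts) == 2:
                names.append(parts[1].strip())
    return "\n".join(names) + "\n"
```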
Now I receive this as the error before the script stops: "Can't find key fmri(level)"
Any ideas?
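For what it’s worth, fmri(level) is a standard FEAT .fsf key; a first-level design normally contains a line like the following, so it is worth checking that your .fsf still has it:

```
# First or higher-level analysis
set fmri(level) 1
```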
On Friday, 24 November 2023 at 11:53:20 UTC-5 Austin Cooper wrote:
Got it! Thank you!
I now receive an error as such:
Fri Nov 24 11:03:17 EST 2023
cat: /project/6001995/Shmuel_Mendola/Data/MGH/HCPNaming//01/MNINonLinear/Results/all_fMRI_data/all_fMRI_data_hp200_s2_level1_MSMAll_hp0_clean_HCP_MMPv1.feat/Contrasts.txt: No such file or directory
My first level analysis did not produce a file named "Contrasts.txt", though it ran to completion successfully. How is this file supposed to be generated?
Use the --summaryname option on the Level1+2 task analysis script to make level 2-like folders.
Unfortunately, I didn’t write this part of the code, and the person who did has left. If you can sort it out, we would love to get a patch to fix it. You could try running the task analysis on HCP data first with one or two runs to see how it works, and then see if you can figure out the issue.
Actually, I have run it in this use case before. I have both pe and cope files in the parcellated folder for level 1. In the Summary directory, I believe I have the appropriate files for level 3 (including the files that triggered the initial errors you reported); however, I never ran level 3 in this situation (so at least I would not have hit the error you reported). Thus, I am at a bit of a loss as to the issues you are having. Perhaps do as I suggested and run this on some HCP data to prove it can work on your end? Here is what an example run looks like for me:

Matt.
If you aren’t getting errors when you run the task fMRI analysis pipeline but you aren’t getting the expected outputs, an issue with your .fsf is likely, because that encodes all of the stats setup.
Matt.
From: Austin Cooper <austin....@gmail.com>
Date: Thursday, November 30, 2023 at 4:05 PM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: "Glasser, Matt" <glas...@wustl.edu>, Austin Cooper <austin....@gmail.com>
Subject: Re: [hcp-users] Group-Level Z-Stat
Interesting, thank you for sharing, Matt.
I just compared my level 1 .fsf file to the standard HCP ones and saw that there was a difference which is likely the cause. HCP .fsf files have:
# Carry out post-stats steps?
set fmri(poststats_yn) 1
And I had:
# Carry out post-stats steps?
set fmri(poststats_yn) 0
I hope this solves the issue.
Will post when I know.
Warm regards,
Austin
That is the second opened folder towards the bottom. A few zstats are cut off at the bottom, but otherwise that is a complete set of outputs.
I don’t think so; I probably deleted it. That said, it does indeed look like a bunch of outputs are not created at all.
Out of curiosity, what happens if you attempt to run FEAT itself with your current .fsf on a subject’s NIFTI files?
The .fsf doesn’t actually matter for most of the mechanics of the task analysis pipeline; instead, it is there to store the task design. Do the design.png files that get spit out look correct? Can you paste them here?
Matt.
So to summarize:
Parcellated is expected to be super fast because it is a tiny amount of data being run through the GLM.
Matt.
The current version of FSL is 6.0.7.7 released a couple of weeks ago. You have 6.0.4, which was released 3 years ago.
The main thing to check is whether the appropriate files are generated (and not just zstats).
You need some surfaces and volumes to display the .dtseries.nii files. Perhaps Jenn can point you to the Connectome Workbench tutorial and some sample .spec files for visualization.
Jennifer Elam, Ph.D.
Scientific Outreach, Human Connectome Project
Washington University School of Medicine
Department of Neuroscience, Box 8108
660 South Euclid Avenue
St. Louis, MO 63110
314-362-9387
el...@wustl.edu
www.humanconnectome.org
Sounds like you need 32k surfaces.
Is your input fMRI data demeaned? That doesn’t work with FEAT.
So you have found that neither subcortical nor parcellated works for you; both use a volume workflow (a fake volume, in the parcellated case), whereas surface uses a somewhat different workflow.
Okay, I didn’t connect the two questions. There are major problems with comparing across runs, because there is no guarantee that the means will be equivalent (for MR-physics reasons); I think I said that in the past. This is why we demean the images before concatenating them, so that we are not affected by such issues. Also, if you are comparing conditions that are present only in separate runs, some or all of your effect of interest could end up in the image mean.
Separately from all of that, you need a mean image to use FILM properly with the volume workflow. You could add the grand mean (across all the runs) back into the images to get it to run; however, that does not address the problems mentioned above for your specific analysis. Perhaps a better strategy would be to run ICA on the concatenated data and look for components that are present in only one kind of run. If you have enough data, temporal ICA is quite good at this (and will pull out things like task block designs; see Glasser et al., 2018, NeuroImage).
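The demean-then-restore-the-grand-mean bookkeeping described above can be sketched numerically. This is a pure-Python toy on a single voxel’s timeseries; on real images you would do the equivalent with your usual imaging tools rather than this function:

```python
def concat_with_grand_mean(runs):
    """Demean each run, concatenate, then add back the grand mean.

    `runs` is a list of per-run timeseries (lists of floats) for a single
    voxel/grayordinate. Demeaning removes run-specific mean differences
    (which can differ for MR-physics reasons); adding the grand mean back
    gives FILM the nonzero mean image it needs in the volume workflow.
    """
    all_vals = [v for run in runs for v in run]
    grand_mean = sum(all_vals) / len(all_vals)
    out = []
    for run in runs:
        run_mean = sum(run) / len(run)
        # Remove this run's own mean, restore the grand mean.
        out.extend(v - run_mean + grand_mean for v in run)
    return out
```

Note that this restores a common mean for FILM but, as discussed above, it does not fix the underlying problem of comparing conditions that live in different runs.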