Hi Francesko,
I am a bit confused because I cannot see my last reply to your post. Is it because I replied with a private message or did the message just not send in the end?
[…]
I am writing to you again because I am almost positive that my last reply fell through the cracks of the reply system. Most likely I made a mistake.
I hope this is not a redundant message; if it is, I just want to clarify that I do not mean to pressure you for an answer in any way.
Thanks for the clarification; I appreciate your patience waiting for a response. I have an email from you in this thread in my inbox that I don’t see on the forum, so I think you did privately reply to me. Nothing wrong with replying privately, but of course someone else might answer sooner than I can when the message is not just to me.
First of all, the exact command run was the one suggested by you:
singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --pipeline_file nhp_pip_diffmag_flag.yml
where the pipeline file has the flag for field distortion correction (diff_phase_map) and the number of cores per participant set to 1 to avoid the errors that we were discussing above in the thread.
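For reference, the two settings I mean look roughly like this in the pipeline YAML (a sketch, not copied verbatim from nhp_pip_diffmag_flag.yml; the key names are the ones I believe the 1.8.x default config uses, so please correct me if they are wrong):

# Sketch of the relevant settings (from memory, not verbatim from my file).
pipeline_setup:
  system_config:
    max_cores_per_participant: 1   # one core per participant, to avoid the earlier errors

functional_preproc:
  distortion_correction:
    run: [On]
    using: [PhaseDiff]             # phase-difference field map (diff_phase_map)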
[…]
Be that as it may, I finally managed to run the command with the options organized as you suggested:
singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --pipeline_file nhp_pip_diffmag_flag.yml --n_cpus 1
but there were a few problems that I managed to overcome through some not-so-neat workarounds.
I think/hope setting the number of cores to 1 is unnecessary in v ≥ 1.8.2, but that setting shouldn’t hurt anything but speed.
The first error I encountered was with the anatomical preprocessing steps in the nhp-macaque preconfig file. These seem to do a very poor job with my data, even though these are macaque anatomical scans.
I managed to overcome this by using the monkey preconfig file as the basis for my pipeline file. In this way, the ACPC alignment and the skull stripping seem to work just fine. I did not have time to look into the particular preprocessing steps that differ between the two, my apologies; I first wanted a pipeline that could get through all the steps before going into the details.
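In practice, basing my pipeline file on the monkey preconfig looks roughly like this (a sketch; if I understand the C-PAC docs correctly, the FROM key imports a preconfig or another pipeline file as a base, and any keys below it override that base):

# Sketch: inherit everything from the "monkey" preconfig and override only what I need.
FROM: monkey

pipeline_setup:
  pipeline_name: monkey_based_nhp   # hypothetical name, just to illustrate an override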
[…]
1. Non-optimal processing of the anatomical data with the nhp-macaque preconfig file (Workaround --> switched to "monkey" preconfig file)
Here are the meaningful differences between those two pipelines:
monkey
anatomical_preproc:
  acpc_alignment:
    FOV_crop: flirt
    T1w_brain_ACPC_template: /cpac_templates/MacaqueYerkes19_T1w_0.5mm_brain.nii.gz
    T2w_ACPC_template: /cpac_templates/MacaqueYerkes19_T2w_0.5mm.nii.gz
    T2w_brain_ACPC_template: /cpac_templates/MacaqueYerkes19_T2w_0.5mm_brain.nii.gz
segmentation:
  run: On
  tissue_segmentation:
    using: [ANTs_Prior_Based]
registration_workflows:
  functional_registration:
    coregistration:
      interpolation: spline
      cost: mutualinfo
      arguments: -searchrx -30 30 -searchry -30 30 -searchrz -30 30
voxel_mirrored_homotopic_connectivity:
  symmetric_registration:
    T1w_brain_template_symmetric_for_resample: /cpac_templates/resources/MacaqueYerkes19_T1w_1.0mm.nii.gz

nhp-macaque
anatomical_preproc:
  acpc_alignment:
    FOV_crop: robustfov
    T1w_brain_ACPC_template: /usr/share/fsl/5.0/data/standard/MNI152_T1_1mm_brain.nii.gz
    T2w_ACPC_template:
    T2w_brain_ACPC_template:
segmentation:
  run: Off
  tissue_segmentation:
    using: [FSL-FAST]
registration_workflows:
  functional_registration:
    coregistration:
      interpolation: trilinear
      cost: corratio
      arguments:
voxel_mirrored_homotopic_connectivity:
  symmetric_registration:
    T1w_brain_template_symmetric_for_resample: /cpac_templates/MacaqueYerkes19_T1w_1.0mm.nii.gz
It looks to me like nhp-macaque may be incomplete.
Second, there is a problem with the functional data (with both the nhp-macaque and monkey preconfigs) when it comes to the slice timing. I think the issue is that the scanner used for the data acquisition is a vertical scanner, and AFNI is confusing the number of slices with the y direction of the in-plane acquisition matrix. To overcome this I simply removed the slice-timing step from my preprocessing, although I don't think this is a very good solution.
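Concretely, the "workaround" was just switching the step off in the pipeline YAML, roughly like this (a sketch assuming the 1.8.x key layout; my actual file may differ slightly):

# Sketch of the workaround: disable slice-timing correction altogether.
functional_preproc:
  slice_timing_correction:
    run: [Off]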
[…]
2. Runtime errors caused by the slice-timing correction subroutine, which reads an incorrect number of slices because the functional images were acquired in a vertical scanner (Workaround --> switched off the slice-timing flag in the pipeline config file)
I don’t have an intuition here, but it definitely sounds like something to fix.
Third, the pipeline still crashes after those workarounds when it comes to registering the functional data to the anatomical data it seems. The error is not clear, but it seems like the pipeline can't find some ANT templates to use for the coregistration. Additionally, I think it is complaining about an excessive use of resources when.
[…]
To try to overcome this issue, I first increased the number of cores available per participant to 6 (with mem_gb = 30), but it seems to still crash at about the same point, this time with an additional note.
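For reference, the resource settings for this attempt were roughly the following (a sketch of the system_config keys as I remember them, not copied verbatim from my file):

# Sketch of the resource settings used for this attempt.
pipeline_setup:
  system_config:
    max_cores_per_participant: 6        # was 1 in the earlier runs
    maximum_memory_per_participant: 30  # GB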
The excessive usage and report generation warnings are probably safe to ignore; they're just flagging node-specific estimate/usage mismatches, and by v1.8.2 those should be pretty small differences. The report failed / report generation failed messages are curious, but also probably safe to ignore.
Finally, I tried changing the method from ANTs-based coregistration to FreeSurfer, but this didn't help either.
[…]
However, after these two "corrections" I seem to encounter a problem at the EPI2ANATOMY coregistration procedure.
From the attached crashfile, I see “At least 2 warped image/label pairs needs to exist for jointFusion.” being raised when running:
antsJointLabelFusion.sh -d 3 -o ants_multiatlas_ \
  -t /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/cpac_runs_monkey_1core/default/working/cpac_sub-1001_ses-1/brain_extraction_58/sub-1001_T1w_resample_warp_noise_corrected_corrected_calc.nii.gz \
  -x /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/cpac_runs_monkey_1core/default/working/cpac_sub-1001_ses-1/refined_mask_53/MacaqueYerkes19_T1w_1_maths_trans_flirt_thresh.nii.gz \
  -y b \
  -c 0 \
  -g s3://fcp-indi/resources/cpac/resources/MacaqueYerkes19_T1w_0.5mm/T1w_brain.nii.gz \
  -l s3://fcp-indi/resources/cpac/resources/MacaqueYerkes19_T1w_0.5mm/Segmentation.nii.gz \
  -g s3://fcp-indi/resources/cpac/resources/J_Macaque_11mo_atlas_nACQ_194x252x160space_0.5mm/T1w_brain.nii.gz \
  -l s3://fcp-indi/resources/cpac/resources/J_Macaque_11mo_atlas_nACQ_194x252x160space_0.5mm/Segmentation.nii.gz
(The whole jointFusion log is included in that crashfile.)
I don’t understand what’s going wrong here; it looks like we’re providing 2 image/label pairs but ANTs is complaining about having fewer than that. I opened an issue so we can track the debugging and include a fix in a future release.
I am attaching the log and crash files as usual in case they are needed, as well as the pipeline and data_config files.
[…]
I am attaching the log files for these attempts and the log and crash file for the default method defined in the "monkey" preconfig file (ANTs coregistration at 1 core).
I think the attached crashfile is crash-20220302-175235-fmolla-seg_preproc_ants_prior_74_antsJointLabel-4dbdaa52-accb-43d5-affc-dde2d1c67efd.txt or crash-20220302-174132-fmolla-seg_preproc_ants_prior_74_antsJointLabel-149fb67a-2075-468e-93db-1a0d9df2986d.txt. The fact that 6 cores and 1 core crash in the same place is encouraging; it suggests this issue isn't a matter of resource allocation.
If you still have them, could you also share crash-20220302-171318-fmolla-CerebrospinalFluid_Functional_flirt.a0-9e8edb73-caaa-4a4b-b814-d7f1af103b2a.txt and crash-20220302-171318-fmolla-WhiteMatter_Functional_flirt.a0-9bf45e08-772a-417f-bd4f-c1b73df41ee1.txt? If I understand correctly, you want to use ANTs for coregistration and these crashes occurred with FreeSurfer coregistration, but those files would still be useful for us to diagnose those crashes, even if they’re a lower priority.
Thanks again for reaching out with these questions and supporting information, and for bearing with these long responses.