nhp-macaque pipeline


Francesco Molla

Jan 25, 2022, 7:20:48 AM
to cpax_forum
Dear C-PAC community,

I am trying to run the cpac pipeline on the publicly available data from the INDI-PRIME project (http://fcon_1000.projects.nitrc.org/indi/PRIME/uwmadison.html).

After setting up the pipeline to run using the Singularity container and pulling the most up-to-date image from "shub://FCP-INDI/C-PAC", I ran some tests to see whether everything was configured correctly.

I do so by running the command:

> cpac --platform singularity run    /$home/sub-1001    /$home/test_results/    test_config    --preconfig anat-only

and subsequently

> cpac --platform singularity run    /$home/sub-1001    /$home/test_results/    participant --preconfig anat-only

Both commands run properly, producing consistent files in the log, output, and crash directories, which tells me (I guess) that my pipeline and the environment in which I am working are set up correctly. (Note: the specific crashes are not important right now, as I won't be using this particular pre-configured pipeline.)

The problem arises when I try to do the same thing for the nhp-macaque preconfig file.

> cpac --platform singularity run    /$home/sub-1001   /$home/test_results/    participant    --preconfig nhp-macaque

which gives me terminal output that looks like my attached file "terminal_out_nhp_macaque.txt". Unfortunately, after the last line is echoed to the terminal, nothing further happens, and I don't understand what is going on, except that a "/$home/cpac_runs/defaults/working/" directory is created containing a "pid.txt" file whose PID doesn't correspond to anything running on the machine.
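(To double-check that nothing was still running, I looked up the recorded PID with something along these lines:)

> ps -p "$(cat /$home/cpac_runs/defaults/working/pid.txt)"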

For completeness, I am attaching the log files from the anat-only pipeline too, as well as the pipeline configurations for both the anat-only and the nhp-macaque preconfigs.
The cpac version is "cpac 0.3.2.post1", obtained through the command

> cpac run --version

Any help would be greatly appreciated. Thank you very much for your time and attention.

Best regards,
Francesko






cpac_pipeline_config_anat_only_min.yml
terminal_out_nhp_macaque.txt
cpac_pipeline_config_nhp_macaque_min.yml

Jon Clucas, MIS

Feb 1, 2022, 4:50:33 PM
to cpax_forum
Hi Francesko,

Apologies for not getting this update out sooner, but Singularity Hub (shub) was deprecated last spring, so the most up-to-date image at shub://FCP-INDI/C-PAC will forever be v1.8.0. In response to this post, we've finally released cpac v0.4.0, which replaces all deprecated shub:// references in cpac to avoid this issue going forward.

Hanging indefinitely is usually a sign of an out-of-memory error, and a pid not corresponding to any running process reinforces that sign. v1.8.0 did include memory optimizations, but v1.8.1 and v1.8.2 include further memory optimizations.
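(If memory is the limiting factor, you can also raise the allocation explicitly with the BIDS-app resource flags; for example, with the value here purely illustrative, set it to what your machine actually has:)

> cpac --platform singularity run /$home/sub-1001 /$home/test_results/ participant --preconfig nhp-macaque --mem_gb 16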

A few things you can try:
1. Upgrade cpac to v0.4.0, which drops the deprecated shub:// references.
2. Pull a newer C-PAC image (v1.8.1 or v1.8.2, which include the further memory optimizations mentioned above).
3. Specify the image explicitly in your run command, so cpac doesn't fall back to automatically pulling a stale image.
None of these suggestions are mutually exclusive, and any of them could potentially get you past this hurdle.

Please let us know if any of these suggestions help (or if you try them and the same behavior persists).

Thanks,
Jon

Francesco Molla

Feb 21, 2022, 10:22:20 AM
to cpax_...@googlegroups.com
Dear Jon,


First of all, my apologies for the late answer.
I upgraded to version 0.4.0 of the pipeline and pulled the v1.8.2 image as you suggested. Then I ran the command as you suggested in the third point (specifying the v1.8.2 image name to avoid an automatic pull).
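(For reference, the pull command I used was roughly the following; I am reconstructing it from memory, so the exact tag spelling may differ:)

> singularity pull docker://fcpindi/c-pac:release-v1.8.2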
However, there are still some errors coming up.
The console output is the following:

Running BIDS validator
Loading the 'nhp-macaque' pre-configured pipeline.
#### Running C-PAC
Number of participants to run in parallel: 1
Input directory:  $home/test_sub
Output directory: $home/test_results/output
Working directory: ./cpac_runs/default/working
Log directory: $home/test_results/log
Remove working directory: False
Available memory: 9.0 (GB)
Available threads: 1
Number of threads for ANTs: 1
Parsing /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_sub..
Did not receive any parameters for sub-1001/fmap/sub-1001_acq-func_magnitude1.nii.gz, is this a problem?
Did not receive any parameters for sub-1001/fmap/sub-1001_acq-func_magnitude1.nii.gz, is this a problem?
Did not receive any parameters for sub-1001/fmap/sub-1001_acq-func_magnitude2.nii.gz, is this a problem?
Did not receive any parameters for sub-1001/fmap/sub-1001_acq-func_magnitude2.nii.gz, is this a problem?
Starting participant level processing
Run called with config file /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_results/cpac_pipeline_config_2022-02-21T16-10-39Z.yml
220221-16:10:42,722 nipype.workflow INFO:


[!] LOCKING CPUs PER PARTICIPANT TO 1 FOR U-NET MODEL.

This is a temporary measure due to a known issue preventing Nipype's parallelization from running U-Net properly.


220221-16:10:42,724 nipype.workflow INFO:

    Run command: run /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_sub/ /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_results/ test_config --preconfig nhp-macaque

    C-PAC version: 1.8.2.dev

    Setting maximum number of cores per participant to 1
    Setting number of participants at once to 1
    Setting OMP_NUM_THREADS to 1
    Setting MKL_NUM_THREADS to 1
    Setting ANTS/ITK thread usage to 1
    Maximum potential number of cores that might be used during this run: 1


220221-16:10:43,535 nipype.utils WARNING:
A newer version (1.7.0) of nipy/nipype is available. You are using 1.5.1
/code/CPAC/pipeline/cpac_runner.py:307: UserWarning: We recommend that the working directory full path should have less then 70 characters. Long paths might not work in your operating system.
  warnings.warn("We recommend that the working directory full path "
/code/CPAC/pipeline/cpac_runner.py:310: UserWarning: Current working directory: /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/cpac_runs/default/working
  warnings.warn("Current working directory: %s" % c.pipeline_setup['working_directory']['path'])
Traceback (most recent call last):
  File "/code/run.py", line 743, in <module>
    run_main()
  File "/code/run.py", line 724, in run_main
    test_config=(1 if args.analysis_level == "test_config" else 0)
  File "/code/CPAC/pipeline/cpac_runner.py", line 565, in run
    raise e
  File "/code/CPAC/pipeline/cpac_runner.py", line 562, in run
    p_name, plugin, plugin_args, test_config)
  File "/code/CPAC/pipeline/cpac_pipeline.py", line 413, in run_workflow
    subject_id, sub_dict, c, p_name, num_ants_cores
  File "/code/CPAC/pipeline/cpac_pipeline.py", line 1070, in build_workflow
    wf, rpool = initiate_rpool(wf, cfg, sub_dict)
  File "/code/CPAC/pipeline/engine.py", line 1880, in initiate_rpool
    part_id, ses_id)
  File "/code/CPAC/pipeline/engine.py", line 1451, in ingress_raw_func_data
    data_paths['creds_path'], ses_id)
  File "/code/CPAC/utils/datasource.py", line 451, in ingress_func_metadata
    node, out_file = rpool.get('diffphase_dwell')[
  File "/code/CPAC/pipeline/engine.py", line 244, in get
    raise LookupError("\n\n[!] C-PAC says: The listed resource is "
LookupError:

[!] C-PAC says: The listed resource is not in the resource pool:
diffphase_dwell

Developer Note: This may be due to a mismatch between the node block's docstring 'input' field and a strat_pool.get_data() call within the block function.

I am not sure whether this is a problem with the data (and whether I can switch off the lookup for this part of the processing by changing a flag inside the .yml file), or whether the path to the data is defined incorrectly and I am missing something. Judging from the output, though, the BIDS validator seems to identify the data correctly.

Thank you very much for your help.

Best,
Francesko




Jon Clucas, MIS

Feb 22, 2022, 10:20:04 AM
to cpax_forum
Hi Francesko,

We have a known issue: if a subject in your input BIDS directory has diffmag but no diffphase files in its field-map subdirectory, the error you posted (LookupError: [!] C-PAC says: The listed resource is not in the resource pool: diffphase_dwell) occurs. We'll have a fix for this in a future release, but in the meantime, an (admittedly frustrating) workaround is to take the data configuration file generated from the failed run and clear out the diffmag entries.
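(Schematically, a participant entry in the generated data configuration looks something like the sketch below; the key names and paths here are illustrative, so match them against your own generated file. The workaround is deleting the diffmag lines:)

- subject_id: sub-1001
  unique_id: ses-1
  anat: /bids_dir/sub-1001/anat/sub-1001_T1w.nii.gz
  fmap:
    diffmag: /bids_dir/sub-1001/fmap/sub-1001_acq-func_magnitude1.nii.gz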

🤞,

Jon

Francesco Molla

Feb 25, 2022, 9:52:31 AM
to cpax_forum
Hi Jon,

Thanks for your answer. I did as you suggested, but I am still running into rather unspecific errors.
To be more precise, the error that I am getting now is:

    singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --data_config_file nhp_pip_diffmag_flag.yml --n_cpus 1

    #### Running C-PAC
    Number of participants to run in parallel: 1
    Output directory: /$home/test_results/output
    Working directory: /tmp
    Log directory: /$home/test_results/log
    Remove working directory: True
    Available memory: 1.0 (GB)

    Available threads: 1
    Number of threads for ANTs: 1
    Traceback (most recent call last):
    File "/code/run.py", line 743, in <module>
    run_main()
    File "/code/run.py", line 646, in run_main
    if isinstance(sub.get('anat'), dict):
    AttributeError: 'str' object has no attribute 'get'

It seems to me that the pipeline is not able to read the data in the test_sub directory, or that I am doing something wrong somehow.
Before running the image with Singularity, I define and export a $SINGULARITY_HOME variable equal to the $home directory in which I am working. I do this because I am not a root user on the cluster where Singularity is installed. Could this be the source of the confusion? Or could it be that the directory tree for my subject is not what the pipeline is expecting? For completeness, I am attaching a file with the directory tree for my test subject.
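(Concretely, what I run beforehand is along these lines, using the same $home placeholder as above:)

> export SINGULARITY_HOME=/$home

(If I understand the Singularity docs correctly, explicitly binding the directory with singularity run -B <host_dir>:<container_dir> ... should be an equivalent alternative.)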

Thanks again for your help.

Best,
Francesko


directory_tree.txt

Jon Clucas, MIS

Feb 25, 2022, 3:47:05 PM
to cpax_forum
Hi Francesko,

Hopefully we can get this working for you. It looks like C-PAC is having trouble parsing nhp_pip_diffmag_flag.yml. Could you please share that file?

Thanks,

Jon

Francesco Molla

Feb 26, 2022, 5:37:23 AM
to cpax_...@googlegroups.com
Sure, here it is. Thanks a lot.



nhp_pip_diffmag_flag.yml

Jon Clucas, MIS

Feb 28, 2022, 11:18:42 AM
to cpax_forum
Thanks!

That file is a pipeline configuration rather than a data configuration (a "Yaml file containing the location of the data that is to be processed. This file is not necessary if the data in bids_dir is organized according to the BIDS format."), so the solution might be as simple as replacing data_config_file with pipeline_file in your run command, e.g.,

singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --pipeline_file nhp_pip_diffmag_flag.yml --n_cpus 1
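(A quick way to tell the two apart: a data configuration is a YAML list of participant entries, while a pipeline configuration is a mapping of settings, which is why C-PAC failed at sub.get('anat') when handed the latter. A minimal sketch, with illustrative paths:)

# data configuration: a list of participant entries
- subject_id: sub-1001
  unique_id: ses-1
  anat: /bids_dir/sub-1001/anat/sub-1001_T1w.nii.gz

# pipeline configuration: a mapping of settings
pipeline_setup:
  pipeline_name: cpac_monkey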

Please let us know how it goes!

Thanks again,

Jon

Francesco Molla

Mar 3, 2022, 5:18:03 AM
to cpax_forum
Hi Jon,

I am a bit confused because I cannot see my last reply to your post. Is it because I replied with a private message or did the message just not send in the end?

Best,
Francesko

Francesco Molla

Mar 7, 2022, 7:16:15 AM
to cpax_...@googlegroups.com
Hi Jon,

I am writing to you again because I am almost positive that my last reply fell through the cracks of the reply system. Most likely I made a mistake.
I hope this is not a redundant message, and if it is, I want to be clear that I don't mean to pressure you for an answer in any way.

Be that as it may, I finally managed to run the command with the options organized as you suggested.

    singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --pipeline_file nhp_pip_diffmag_flag.yml --n_cpus 1

but there were a few problems that I managed to overcome through some not-so-neat workarounds. These problems were:
1. Non-optimal processing of the anatomical data with the nhp-macaque preconfig file (workaround: switched to the "monkey" preconfig file).
2. Runtime errors caused by the slice-timing correction subroutine, which reads an incorrect number of slices because the functional images were acquired in a vertical scanner (workaround: switched off the slice-timing flag in the pipeline config file; see the snippet after this list).
In case you need to know more about these issues, let me know and I can give you more details.
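(For the record, the switch I turned off in the pipeline file is the following; the key names are as I have them in my 1.8.x config, so worth double-checking against your own file:)

functional_preproc:
  slice_timing_correction:
    run: [Off]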

However, after these two "corrections", I encounter a problem at what seems to be the EPI2ANATOMY coregistration procedure, with the following output.

    220302-17:42:02,184 nipype.workflow ERROR:
    could not run node: cpac_sub-1001_ses-1.seg_preproc_ants_prior_74.seg_preproc_ants_prior_74_antsJointLabel
    220302-17:42:02,184 nipype.workflow INFO:
    crashfile: /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_results/log/crash-20220302-174132-fmolla-seg_preproc_ants_prior_74_antsJointLabel-149fb67a-2075-468e-93db-1a0d9df2986d.txt
    220302-17:42:02,185 nipype.workflow INFO:
    ***********************************
    Excessive usage report failed for /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_results/log/pipeline_cpac_default_monkey_skullstrip/sub-1001_ses-1/callback.log  
    Report generation failed for /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/test_results/log/pipeline_cpac_default_monkey_skullstrip/sub-1001_ses-1/callback.log
    NoneType: None
    220302-17:42:02,206 nipype.workflow INFO:

I tried two different workarounds and their combination, namely changing the coregistration toolbox from ANTs to FreeSurfer and increasing the number of CPUs and memory allocated for the subject (6 cores and 30 GB, respectively), but neither of these steps worked.

I am attaching the log files for these attempts, as well as the log and crash file for the default method defined in the "monkey" preconfig file (ANTs coregistration at 1 core).

Any help would be immensely appreciated, as I don't really know how I could proceed otherwise.

Best,
Francesko



default_ant_coregistration_6cores_log
default_ant_coregistration_crash
default_ant_coregistration_log
freesurfer_coregistration_log

Jon Clucas, MIS

Mar 10, 2022, 4:13:43 PM
to cpax_forum

Hi Francesko, 


I am a bit confused because I cannot see my last reply to your post. Is it because I replied with a private message or did the message just not send in the end? 

[…] 

I am writing to you again because I am almost positive that my last reply fell through the cracks of the reply system. Most likely I made a mistake.

I hope this is not a redundant message, and if it is, I want to be clear that I don't mean to pressure you for an answer in any way.

Thanks for the clarification; I appreciate your patience waiting for a response. I have an email from you in this thread in my inbox that I don’t see on the forum, so I think you did privately reply to me. Nothing wrong with replying privately, but of course someone else might answer sooner than I can when the message is not just to me.


First of all, the exact command run was the one suggested by you: 

singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --pipeline_file nhp_pip_diffmag_flag.yml 

where the pipeline file has the flag for field distortion correction (diff_phase_map) and  the number of cores per participant set to 1 to avoid the errors that we were discussing above in the thread. 

[…] 

Be that as it may, I finally managed to run the command with the options organized as you suggested

    singularity run c-pac\:release-v1.8.2.sif /$home/test_sub /$home/test_results/ participant --pipeline_file nhp_pip_diffmag_flag.yml --n_cpus 1 

but there were a few problems that I managed to overcome through some not-so-neat workarounds.

I think/hope setting the number of cores to 1 is unnecessary in v ≥ 1.8.2, but that setting shouldn’t hurt anything but speed. 

 

The first error I encounter is with the anatomical preprocessing steps in the --nhp-macaque preconfig file. This seems to do a very bad job with my data, even though these are macaque anatomical scans.

I managed to overcome this by using the --monkey preconfig file as a basis for my pipeline file. In this way the ACPC alignment and the skull stripping seem to work just fine. I did not have time to look into the particular preprocessing steps that differ between the two, my apologies; I first wanted a pipeline that could go through all the steps before going into the details.

[…] 

1. Non-optimal processing of the anatomical data with the nhp-macaque preconfig file (workaround: switched to the "monkey" preconfig file)

Here are the meaningful differences between those two pipelines:

monkey

anatomical_preproc:
  acpc_alignment:
    FOV_crop: flirt
    T1w_brain_ACPC_template: /cpac_templates/MacaqueYerkes19_T1w_0.5mm_brain.nii.gz
    T2w_ACPC_template: /cpac_templates/MacaqueYerkes19_T2w_0.5mm.nii.gz
    T2w_brain_ACPC_template: /cpac_templates/MacaqueYerkes19_T2w_0.5mm_brain.nii.gz
segmentation:
  run: On
  tissue_segmentation:
    using: [ANTs_Prior_Based]
registration_workflows:
  functional_registration:
    coregistration:
      interpolation: spline
      cost: mutualinfo
      arguments: -searchrx -30 30 -searchry -30 30 -searchrz -30 30
voxel_mirrored_homotopic_connectivity:
  symmetric_registration:
    T1w_brain_template_symmetric_for_resample: /cpac_templates/resources/MacaqueYerkes19_T1w_1.0mm.nii.gz

nhp-macaque

anatomical_preproc:
  acpc_alignment:
    FOV_crop: robustfov
    T1w_brain_ACPC_template: /usr/share/fsl/5.0/data/standard/MNI152_T1_1mm_brain.nii.gz
    T2w_ACPC_template:
    T2w_brain_ACPC_template:
segmentation:
  run: Off
  tissue_segmentation:
    using: [FSL-FAST]
registration_workflows:
  functional_registration:
    coregistration:
      interpolation: trilinear
      cost: corratio
      arguments:
voxel_mirrored_homotopic_connectivity:
  symmetric_registration:
    T1w_brain_template_symmetric_for_resample: /cpac_templates/MacaqueYerkes19_T1w_1.0mm.nii.gz

It looks to me like nhp-macaque may be incomplete:

  • monkey is included in our user docs but nhp-macaque isn’t;
  • the FOV_crop comment “# Default: robustfov for human data, flirt for monkey data.” concerns me with nhp-macaque set to ‘robustfov’;
  • T2w templates are left unset for nhp-macaque.
The T1w_brain_template_symmetric_for_resample path for monkey seems to be wrong, though, and correct in nhp-macaque.

Second, there is a problem with the functional data (with both the nhp-macaque and monkey preconfigs) when it comes to the slice timing. I think the issue is that the scanner used for the data acquisition is a vertical scanner, and AFNI is confusing the number of slices with the y direction of the in-plane acquisition matrix. To overcome this I just kicked the slice timing out of my preprocessing, although I don't think this is a very good solution.

[…] 

Runtime errors caused by the slice-timing correction subroutine, which reads an incorrect number of slices because the functional images were acquired in a vertical scanner (workaround: switched off the slice-timing flag in the pipeline config file)

I don’t have an intuition here, but it definitely sounds like something to fix.


Third, the pipeline still crashes after those workarounds, seemingly when it comes to registering the functional data to the anatomical data. The error is not clear, but it seems like the pipeline can't find some ANTs templates to use for the coregistration. Additionally, I think it is complaining about an excessive use of resources.

[…] 

To try to overcome this issue, I first increased the number of cores available for the participant to 6 (with mem_gb = 30), but it still crashes at about the same point it seems, this time with an additional note. 

The excessive usage and report generation warnings are probably safe to ignore; they're just alerting on node-specific estimate/usage mismatches, and by v1.8.2 those should be pretty small differences. The "report failed" / "report generation failed" messages are curious, but also probably safe to ignore.


Finally, I tried changing the method from ANTs-based coregistration to FreeSurfer, but this didn't help either

[…] 

However, after these two "corrections", I encounter a problem at what seems to be the EPI2ANATOMY coregistration procedure

From the attached crashfile, I see “At least 2 warped image/label pairs needs to exist for jointFusion.” is being raised when running


antsJointLabelFusion.sh -d 3 -o ants_multiatlas_ \
-t /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/cpac_runs_monkey_1core/default/working/cpac_sub-1001_ses-1/brain_extraction_58/sub-1001_T1w_resample_warp_noise_corrected_corrected_calc.nii.gz \
-x /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/cpac_runs_monkey_1core/default/working/cpac_sub-1001_ses-1/refined_mask_53/MacaqueYerkes19_T1w_1_maths_trans_flirt_thresh.nii.gz \
-y b \
-c 0 \
-g s3://fcp-indi/resources/cpac/resources/MacaqueYerkes19_T1w_0.5mm/T1w_brain.nii.gz \
-l s3://fcp-indi/resources/cpac/resources/MacaqueYerkes19_T1w_0.5mm/Segmentation.nii.gz \
-g s3://fcp-indi/resources/cpac/resources/J_Macaque_11mo_atlas_nACQ_194x252x160space_0.5mm/T1w_brain.nii.gz \
-l s3://fcp-indi/resources/cpac/resources/J_Macaque_11mo_atlas_nACQ_194x252x160space_0.5mm/Segmentation.nii.gz

(The whole log from jointFusion included in that crashfile is

--------------------------------------------------------------------------------------
Start JLFization
--------------------------------------------------------------------------------------
 
--------------------------------------------------------------------------------------
Parameters
--------------------------------------------------------------------------------------
ANTSPATH is /usr/lib/ants
 
Dimensionality:           3
Output prefix:            ants_multiatlas_
Posteriors format:        
Target image:             /p/himmelbach/fmolla/phd/CrossSpeciesConnectivity/cpac_runs_monkey_1core/default/working/cpac_sub-1001_ses-1/brain_extraction_58/sub-1001_T1w_resample_warp_noise_corrected_corrected_calc.nii.gz
Atlas images:             s3://fcp-indi/resources/cpac/resources/MacaqueYerkes19_T1w_0.5mm/T1w_brain.nii.gz s3://fcp-indi/resources/cpac/resources/J_Macaque_11mo_atlas_nACQ_194x252x160space_0.5mm/T1w_brain.nii.gz
Atlas labels:             s3://fcp-indi/resources/cpac/resources/MacaqueYerkes19_T1w_0.5mm/Segmentation.nii.gz s3://fcp-indi/resources/cpac/resources/J_Macaque_11mo_atlas_nACQ_194x252x160space_0.5mm/Segmentation.nii.gz
Transformation:           b
 
Keep all images:          0
Processing type:          0
Number of cpu cores:      2
--------------------------------------------------------------------------------------
./job_T1w_brain_0.sh./job_T1w_brain_1.sh
 
--------------------------------------------------------------------------------------
Starting JLF
--------------------------------------------------------------------------------------
ants_multiatlas_T1w_brain_0_Warped.nii.gz
ants_multiatlas_T1w_brain_1_Warped.nii.gz
Error:  At least 2 warped image/label pairs needs to exist for jointFusion.

).

I don’t understand what’s going wrong here; it looks like we’re providing 2 image/label pairs but ANTs is complaining about having fewer than that. I opened an issue so we can track the debugging and include a fix in a future release. 

 

I am attaching the log and the crash files as usual in case they are needed, as well as the pipeline and data_config file 

[…] 

I am attaching the log files for these attempts and the log and crash file for the default method defined in the "monkey" preconfig file (ANT coregistration at 1 core).

I think the attached crashfile is crash-20220302-175235-fmolla-seg_preproc_ants_prior_74_antsJointLabel-4dbdaa52-accb-43d5-affc-dde2d1c67efd.txt or crash-20220302-174132-fmolla-seg_preproc_ants_prior_74_antsJointLabel-149fb67a-2075-468e-93db-1a0d9df2986d.txt. That the 6-core and 1-core runs crash in the same place is encouraging, in that this issue doesn't seem to be one of resource allocation.

If you still have them, could you also share crash-20220302-171318-fmolla-CerebrospinalFluid_Functional_flirt.a0-9e8edb73-caaa-4a4b-b814-d7f1af103b2a.txt and crash-20220302-171318-fmolla-WhiteMatter_Functional_flirt.a0-9bf45e08-772a-417f-bd4f-c1b73df41ee1.txt? If I understand correctly, you want to use ANTs for coregistration and these crashes occurred with FreeSurfer coregistration, but those files would still be useful for us to diagnose those crashes, even if they’re a lower priority. 

 

Thanks again for reaching out with these questions and supporting information, and for bearing with these long responses.

Francesco Molla

Mar 29, 2022, 10:00:25 AM
to cpax_forum
Hi Jon,

I think I found out what went wrong by re-running the pipeline from scratch and deleting the output from previous runs (it was becoming a bit confusing).
There is an error occurring when the pipeline tries to gather the lateral ventricles:

> FileNotFoundError: File /usr/fsl/current/x86_64/data/atlases/HarvardOxford/HarvardOxford-lateral-ventricles-thr25-2mm.nii.gz does not exist!

I checked my FSL directory, and indeed that file does not exist. However, there is another file called "HarvardOxford-sub-maxprob-thr25-2mm.nii.gz".
It seems to me that "HarvardOxford-sub-maxprob-thr25-2mm.nii.gz" already contains the ventricle labels that are needed, but the pipeline apparently expects a binary mask for each of those labels.
Should I produce these masks from the "HarvardOxford-sub-maxprob-thr25-2mm.nii.gz" file and put them in the folder with the correct names required by the pipeline? Or am I thinking about this in the wrong way?
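(What I have in mind is something like the following, assuming the lateral ventricles are labels 3 (left) and 14 (right) in the subcortical maxprob atlas; the indices should be double-checked against the atlas XML:)

fslmaths HarvardOxford-sub-maxprob-thr25-2mm.nii.gz -thr 3 -uthr 3 -bin lh_lat_vent.nii.gz
fslmaths HarvardOxford-sub-maxprob-thr25-2mm.nii.gz -thr 14 -uthr 14 -bin rh_lat_vent.nii.gz
fslmaths lh_lat_vent.nii.gz -add rh_lat_vent.nii.gz -bin HarvardOxford-lateral-ventricles-thr25-2mm.nii.gz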

I am attaching the crash files produced by the most recent run of the monkey pipeline, and the pipeline file.

Best,
Francesko
crash and pipeline file.zip