run data_setting.yml

Moyan Li

Jun 14, 2021, 2:38:10 PM
to cpax_forum
Hi,

When my data_settings.yml is ready, the instructions say that I need to run the following command:

singularity run \
-B /path/to/bids_dir \
-B /path/to/outputs \
-B /path/to/data/in/data_settings/1 \
-B /path/to/data/in/data_settings/2 \
C-PAC_latest.sif /path/to/bids_dir /path/to/outputs cli -- utils \
data_config build /path/to/data_settings.yml

  1. I am a bit confused about line 4 and line 5. Should I put the path to my data_settings.yml file in both line 4 and line 5? Why are there two data_settings paths, data_settings/1 and data_settings/2?
  2. If my data is not in BIDS format but in a custom format, what should I put in /path/to/bids_dir?


Best,
Moyan

Jon Clucas, MIS

Jun 18, 2021, 3:47:52 PM
to cpax_forum
Hi Moyan,

Thanks for reaching out with your questions.

    1. I am a bit confused about line 4 and line 5. Should I put the path to my data_settings.yml file in both line 4 and line 5? Why are there two data_settings paths, data_settings/1 and data_settings/2?

The documentation that includes the above-referenced command is the section "Generate configuration files for individual-level and group-level" in the user guide. I'll go step by step through that section, which will hopefully answer your questions.

    Once you have a template, you can then configure the file as needed.

The template referred to here is the one generated in the previous section of the documentation ("Create data_settings.yml template file"). If you're using v1.8.0, there's a small bug in the template (from a merge error). You can delete the merge conflict markers (change

# CPAC Data Settings File
<<<<<<< HEAD
# Version 1.8.0
=======
# Version 1.8.0
>>>>>>> master
#

to

# CPAC Data Settings File
# Version 1.8.0
#
) or just download the template from the development branch (which includes a fix for the merge error).
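
If you'd rather script that cleanup than edit by hand, a one-liner like this should work (a minimal sketch: it assumes GNU sed and that the conflict markers appear exactly as shown above, so back up the file first):

# Keep the HEAD side of the conflict: delete the "=======" block through ">>>>>>> master", plus the "<<<<<<< HEAD" line
sed -i '/^=======$/,/^>>>>>>> master$/d; /^<<<<<<< HEAD$/d' data_settings.yml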

    You will need to use a text editor to fill in at least dataFormat, bidsBaseDir (if using BIDS data), outputSubjectListLocation, and subjectListName in your settings file before you build.

For this example, I'll show how to configure non-BIDS-formatted data.

# CPAC Data Settings File
# Version 1.8.0
#
# http://fcp-indi.github.io for more info.
#
# Use this file to generate the data configuration (participant list) YAML file by loading it via the 'Load Preset' button in the Data Configuration Builder UI, or via command line by providing it to the cpac_data_config_setup.py script with the --data_settings_file input flag.

# Select if data is organized using BIDS standard or a custom format.
# Options: 'BIDS' or 'Custom'
dataFormat: Custom

# Base directory of BIDS-organized data.
# BIDS Data Format only.
#
# This should be the path to the overarching directory containing the entire dataset.
bidsBaseDir: None

# File Path Template for Anatomical Files
# Custom Data Format only.
#
# Place tags for the appropriate data directory levels with the tags {site}, {participant}, and {session}. Only {participant} is required.
#
# Examples:
# /data/{site}/{participant}/{session}/anat/mprage.nii.gz
# /data/{site}/{participant}/anat.nii.gz
#
# See the User Guide for more detailed instructions.
anatomicalTemplate: /home/example/data/anatomical/{site}/{participant}/{session}_anat.nii.gz

# File Path Template for Functional Files
# Custom Data Format only.
#
# Place tags for the appropriate data directory levels with the tags {site}, {participant}, {session}, and {series}. Only {participant} is required.
#
# Examples:
# /data/{site}/{participant}/{session}/func/{series}_bold.nii.gz
# /data/{site}/{participant}/{series}/func.nii.gz
#
# See the User Guide for more detailed instructions.
functionalTemplate: /home/example/data/functional/{site}/{participant}/{session}_func.nii.gz

# Required if downloading data from a non-public S3 bucket on Amazon Web Services instead of using local files.
awsCredentialsFile: None

# Directory where CPAC should place data configuration files.
outputSubjectListLocation: /home/example/outputs

# A label to be appended to the generated participant list files.
subjectListName: example-non-BIDS

# Scan/Run ID for the Anatomical Scan
#
# Sometimes, there are multiple anatomical scans for each participant in a dataset.
#
# If this is the case, you can choose which anatomical scan to use for this participant by entering the identifier that makes the scan unique.
#
# Examples:
#
# BIDS dataset
# ../anat/sub-001_run-1_T1w.nii.gz
# ../anat/sub-001_run-2_T1w.nii.gz
# Pick the second with 'run-2'.
#
# Custom dataset
# Example use case: let's say most anatomicals in your dataset are '../mprage.nii.gz', but some participants only have '../anat1.nii.gz' and '../anat2.nii.gz'. You want the mprage.nii.gz files included, but only the anat2.nii.gz in the others.
#
# Place a wildcard (*) in the anatomical filepath template above (../*.nii.gz), then enter 'anat2' in this field to 'break the tie' for participants that have the 'anat1' and 'anat2' scans.
anatomical_scan: None

# For Slice Timing Correction.
# Custom Data Format only.
#
# Path to a .csv file (if not using BIDS-format JSON files) containing information about scan acquisition parameters.
#
# For instructions on how to create this file, see the User Guide.
#
# If 'None' is specified, CPAC will look for scan parameters information provided in the pipeline configuration file.
scanParametersCSV: None

# File Path Template for brain mask files.
# For anatomical skull-stripping.
# Both BIDS and Custom Data Formats.
# (Note: The BIDS specification is still in flux regarding anatomical derivatives - if using a BIDS data directory, use this field to specify the format of your anatomical brain mask file paths.)
#
# Place tags for the appropriate data directory levels with the tags {site}, {participant}, and {session}.
#
# Examples:
# /data/{site}/{participant}/{session}/{participant}_{session}_brain-mask.nii.gz
brain_mask_template: /home/example/data/brain-masks/{site}/{participant}/{session}_brain-mask.nii.gz

# File Path Template for Field Map Phase files
# For field-map based distortion correction.
# Custom Data Format only.
#
# Place tags for the appropriate data directory levels with the tags {site}, {participant}, and {session}.
#
# Examples:
# /data/{site}/{participant}/{session}/fmap/phase.nii.gz
# /data/{site}/{participant}/{session}/{participant}_{session}_phase.nii.gz
fieldMapPhase: /home/example/data/field-maps/{site}/{participant}/{session}_phase.nii.gz

# File Path Template for Field Map Magnitude files
# For field-map based distortion correction.
# Custom Data Format only.
#
# Place tags for the appropriate data directory levels with the tags {site}, {participant}, and {session}.
#
# Examples:
# /data/{site}/{participant}/{session}/fmap/magnitude.nii.gz
# /data/{site}/{participant}/{session}/{participant}_{session}_magnitude.nii.gz
fieldMapMagnitude: /home/example/data/field-maps/{site}/{participant}/{session}_magnitude.nii.gz

I've left out the subsetting fields at the bottom of the document since they're less relevant to your questions.

    Once your data_settings.yml file is ready, you can generate your data configuration file by running the following commands, binding any directories in your data_settings.yml to the same locations in the container:

"Binding any directories in your data_settings.yml" is what the lines

-B /path/to/data/in/data_settings/1 \
-B /path/to/data/in/data_settings/2 \

are intended to represent, so for this data configuration, you'd bind the directories that you specified in the configuration, like

singularity run \
-B /home/example/outputs \
-B /home/example/data/anatomical \
-B /home/example/data/functional \
-B /home/example/data/brain-masks \
-B /home/example/data/field-maps \
-B /home/example/C-PAC-data-config \
C-PAC_latest.sif /home/example/data /home/example/outputs cli -- utils \
data_config build /home/example/C-PAC-data-config/data_settings.yml

if your data_settings.yml is saved in /home/example/C-PAC-data-config. Typically you want to bind the most specific directories you can to include all of the files you want your container to access. In this example, you could combine
-B /home/example/data/anatomical \
-B /home/example/data/functional \
-B /home/example/data/brain-masks \
-B /home/example/data/field-maps
into
-B /home/example/data
if there's nothing else in that directory that would be problematic for your container to access.


    2. If my data is not in BIDS format but in a custom format, what should I put in /path/to/bids_dir?

The "Run on Docker" section of the documentation mostly also applies to Singularity, with the "Run on Singularity" section describing the differences.

Regarding this question, from "Run on Docker",

    Finally, to run the Docker container with a specific data configuration file (instead of providing a BIDS data directory):

    docker run -i --rm \
           -v /Users/You/any_directory:/bids_dataset \
           -v /Users/You/some_folder:/outputs \
           -v /tmp:/tmp \
           -v /Users/You/Documents:/configs \
           fcpindi/c-pac:latest /bids_dataset /outputs participant --data_config_file /configs/data_config.yml

    Note: we are still providing /bids_dataset to the bids_dir input parameter. However, we have mapped this to any directory on your machine, as C-PAC will not look for data in this directory when you provide a data configuration YAML with the --data_config_file flag. In addition, if the dataset in your data configuration file is not in BIDS format, just make sure to add the --skip_bids_validator flag at the end of your command to bypass the BIDS validation process.

The same "C-PAC will not look for data in this directory when you provide a data configuration YAML" applies to the data_config commandline utility. You can put any path in that position (which is only required because C-PAC is a BIDS app). If you're not using BIDS, that path isn't used but is still required for a complete command. In the example above, I passed the directory containing all the data subdirectories in that position.

Please let us know if anything is still unclear,

Jon Clucas, MIS
Associate Software Developer
Computational Neuroimaging Lab
Child Mind Institute
646.625.4319
childmind.org | Location

Moyan Li

Jun 21, 2021, 4:18:56 PM
to cpax_forum
Hi Jon,
Thanks a lot for your detailed explanation. I have followed your instructions to generate the data configuration file. I am now trying to get the derivative "fALFF" from the data. I have tried for several days, but I cannot make C-PAC run correctly. If I use the default pipeline and the command below:

######################
cpac run /nfs/turbo/jiankanggroup/hcp_rest/data /nfs/turbo/jiankanggroup/hcp_rest/try/newdata participant --data_config_file /nfs/turbo/jiankanggroup/hcp_rest/try/data_config_try.yml --skip_bids_validator
######################

Then it always gets stuck at some point and makes no progress for more than 3 days... I attached a screenshot for reference.

WechatIMG108.jpeg

Also, if I design the pipeline myself (setting "Off" for all the functional, longitudinal, and anatomical preprocessing and only running the fALFF step) and use the following command:

####################
cpac run /nfs/turbo/jiankanggroup/hcp_rest/data /nfs/turbo/jiankanggroup/hcp_rest/try/newdata participant --data_config_file /nfs/turbo/jiankanggroup/hcp_rest/try/data_config_try.yml --pipeline_file /nfs/turbo/jiankanggroup/hcp_rest/try/pipeline_config_try2.yaml --skip_bids_validator
####################

C-PAC will raise an error and return nothing.

I am quite new to C-PAC and really confused by all these problems. I have attached the pipeline_config file, the data_settings file, and the data for one of the participants to this email. I would appreciate it if you could help me with these problems.
Many thanks!

Best,
Moyan
data_config_try.yml
data_settings.yml
pipeline_config_try2.yaml

Moyan Li

Jun 21, 2021, 4:28:17 PM
to cpax_forum
Hi Jon, 

One of the files I forgot to attach shows the error I got when I ran the default pipeline. I am not sure how to fix it.

截屏2021-06-21 下午4.26.17.png
Thanks!

Best,
Moyan

Jon Clucas

Jun 21, 2021, 5:35:23 PM
to cpax_forum
Hi Moyan,

Do any of your attempted runs have output directories that include log files? If so, could you please share those log files?

Thanks,

Jon Clucas, MIS
Associate Software Developer
Computational Neuroimaging Lab
Child Mind Institute
646.625.4319
childmind.org | Location

Moyan Li

Jun 21, 2021, 8:49:09 PM
to cpax_forum
Hi Jon,

Here is the log file I got when I used the default pipeline from https://fcp-indi.github.io/docs/latest/user/pipelines/default. It seems that after it gave me those warnings, it made no progress for a long time, and I did not get any output files.

Best,
Moyan
log_default.zip

Moyan Li

Jun 21, 2021, 9:06:22 PM
to cpax_forum
Hi Jon,

I have also attached the log file from when I changed the pipeline_config file slightly, based on the default pipeline (set "Off" for functional preprocessing; everything else remains the same as the default). Now I get an error message. I attached the log file, the pipeline_config file, and screenshots of the error.
Thanks for your help!

Best,
Moyan

log_no_func.zip
screen1.png
screen2.png
pipeline_default2.yaml

Jon Clucas

Jun 22, 2021, 11:36:14 AM
to cpax_forum
Hi Moyan,

Thanks for the logs and additional information and configuration files. There are a lot of questions in this thread; I'll try to cover them all, but please let me know if I miss any.

Generally, if C-PAC stops running without completing or throwing an error, that means an out-of-memory error. These logs support the hypothesis that your pipeline is running out of memory at the node func_reorient_87. I can see from screen1.png that you're running with the default memory limit of 6 GB. From log_default/100206_ses-1/callback.log,

Jon Clucas, MIS
Associate Software Developer
Computational Neuroimaging Lab
Child Mind Institute
646.625.4319
childmind.org | Location


Jon Clucas

Jun 22, 2021, 12:22:38 PM
to cpax_forum
Sorry, I sent that last message prematurely.

Generally, if C-PAC stops running without completing or throwing an error, that means an out-of-memory error. These logs support the hypothesis that your pipeline is running out of memory at the node func_reorient_87. I can see from screen1.png that you're running with the default memory limit of 6 GB. From log_default/100206_ses-1/callback.log, it looks like your data has a high enough resolution (spatial, temporal, or both) that I'd estimate you'd need at least 6.7 GB for the node that's deadlocking and/or crashing.
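
If you'd like to check that yourself, each line of callback.log is a JSON record for one node, so a plain grep will pull out the records for the suspect node (a rough sketch; the exact fields vary by nipype version):

grep func_reorient log_default/100206_ses-1/callback.log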

Depending on what your goals and hardware restrictions are, you could either provide more memory (with the --mem_gb flag in the run command, like 

cpac run /nfs/turbo/jiankanggroup/hcp_rest/data /nfs/turbo/jiankanggroup/hcp_rest/try/newdata participant --data_config_file /nfs/turbo/jiankanggroup/hcp_rest/try/data_config_try.yml --skip_bids_validator --mem_gb 7

to allocate 7 GB of memory) or downsample your input data (use a lower spatial resolution or fewer timepoints).
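
For the downsampling route, one option is AFNI's 3dresample (a sketch only; the 3 mm target here is hypothetical, so pick a grid that suits your analysis):

# Resample a functional image from its native grid to 3 mm isotropic
3dresample -dxyz 3 3 3 -prefix func_3mm.nii.gz -inset func.nii.gz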

Alternatively, if you have the data already preprocessed up to or beyond that step, you can arrange those data in BIDS Derivatives format and use that BIDS Derivatives directory as either your input BIDS directory or your output directory (which might also be the answer to your question here).

Question: I encountered a problem when I run the default C-PAC pipeline with the command:

##########
cpac run /nfs/turbo/jiankanggroup/hcp_rest/data /nfs/turbo/jiankanggroup/hcp_rest/try/newdata/outputs participant --data_config_file /nfs/turbo/jiankanggroup/hcp_rest/try/data_config_try.yml --skip_bids_validator
###########

It seems that it just gets stuck at some point when running "slice timing". After it showed the warnings below, the procedure did not continue and I did not get any output files. I am really confused about this problem. I would appreciate it if you could give me some suggestions.

Response 1: The amount of memory allocated appears to be insufficient for the combination of pipeline and data. Allocating more memory, reducing the size of the input data, providing some preprocessed data or adjusting the pipeline to avoid unwanted memory-hungry steps are all potential solutions. See above for some further explanation.

Question: I am wondering if I could just get ALFF and f/ALFF derivatives without doing any preprocessing using C-PAC.

Response 2: I think only if you have already preprocessed data in BIDS format and provide that directory to C-PAC.

Response 3 (to the error in your screenshots): This error is due to a pipeline configuration that requires data that are not provided and will not be generated by the given configuration. Either preprocessed data must be provided (for this, I believe BIDS format is the only way C-PAC can currently comprehend preprocessed data) or the preprocessing steps that feed into FALFF must be turned on.

Jon Clucas, MIS

Jun 22, 2021, 12:29:23 PM
to cpax_forum
…aaaand Google Groups changed the table formatting.

Here's another attempt at clarity.

Response 1: The amount of memory allocated appears to be insufficient for the combination of pipeline and data. Allocating more memory, reducing the size of the input data, providing some preprocessed data or adjusting the pipeline to avoid unwanted memory-hungry steps are all potential solutions. See above for some further explanation.

Response 1 applies to questions 1-5.

Response 2: I think only if you have already preprocessed data in BIDS format and provide that directory to C-PAC.

Response 2 applies to question 6.

Response 3: This error is due to a pipeline configuration that requires data that are not provided and will not be generated by the given configuration. Either preprocessed data must be provided (for this, I believe BIDS format is the only way C-PAC can currently comprehend preprocessed data) or the preprocessing steps that feed into FALFF must be turned on.

Response 3 applies to question 7.

Moyan Li

Jun 22, 2021, 7:32:30 PM
to cpax_forum
Hi Jon,

Thanks a lot for your help! It seems that the main reason I cannot run the default pipeline is insufficient memory.

1. 
I just tried setting --mem_gb 10, but I still got the error below and the workflow was killed.
####################
[Node] Error on "cpac_100206_ses-1.func_reorient_87" (/tmp/cpac_100206_ses-1/_scan_func-1/func_reorient_87)
210622-19:06:52,193 nipype.workflow ERROR:
Node func_reorient_87.a0 failed to run on host gl-login2.arc-ts.umich.edu.
#####################
I attached the log file for your reference. I assume that it is still the memory problem, but I am not sure. I am wondering if you could please take a look and give me some suggestions. Do I still need to allocate more memory?

2. 
Another small question: in your Response 3, you said "or the preprocessing steps that feed into FALFF must be turned on". Does that mean that if my data is not in BIDS format, I need to turn on all of the preprocessing steps provided in the default pipeline before I feed the data into fALFF? Will that cause a problem if my data was preprocessed before I used C-PAC?

Best,
Moyan

off_slice.zip

Jon Clucas, MIS

Jun 22, 2021, 8:38:45 PM
to cpax_forum
1. Yeah, 

RuntimeError: Command:
3dresample -orient RPI -prefix rfMRI_REST1_RL_calc_resample.nii.gz -inset /tmp/cpac_100206_ses-1/_scan_func-1/func_reorient_87/rfMRI_REST1_RL_calc.nii.gz
Standard output:

Standard error:
Killed
Return code: 137

means 3dresample ran out of memory. I'm surprised 10 GB isn't enough. I wonder if you're really getting as much as you're asking for. Right at the beginning of a C-PAC run, before C-PAC starts logging, it prints out some runtime information including a line 

Available memory: {0} (GB)

with {0} filled in with the available memory. If you're getting something other than what you specified there, the parameter isn't being set. 

Another possibility is C-PAC is trying to allocate more memory than you actually have available. If you're running locally on *nix, you can run 

free --giga

to see how much RAM you actually have available to use. If you're running on a server, you might have to ask an admin to increase your RAM allowance.
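
If the server happens to use a scheduler like SLURM (an assumption on my part; your cluster may differ), the job's memory request is separate from C-PAC's --mem_gb, so you'd want to set both, e.g.:

# Hypothetical SLURM job: request 12 GB from the scheduler, then let C-PAC use up to 10
srun --mem=12G cpac run /path/to/data /path/to/outputs participant --mem_gb 10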

2. You don't need all the preprocessing steps, just the ones that generate the inputs FALFF needs that you aren't already providing, so 

    • at least one of {"desc-cleaned_bold", "desc-brain_bold", "desc-preproc_bold", "bold"}
    and
    • "space-bold_desc-brain_mask"

You've already got a bold file (your original functional input would work for that input, if that's what you want to use), so you'd just need something to generate space-bold_desc-brain_mask. For example, distortion correction in C-PAC would generate that file, but that step has its own required inputs. So you'd have to provide those inputs or turn on steps to generate them, and so on up the chain until you only have steps with inputs that you provide or that will be generated.

Looking again at the error message you provided, coregistration is actually the step that is complaining about missing inputs. (For example, distortion correction would provide desc-mean_bold, and/or functional preprocessing would provide desc-brain_bold).

Again, you don't need to turn everything on, just the things that you need to get from where you're starting to where you're trying to go.

Moyan Li

Jun 22, 2021, 10:21:24 PM
to cpax_forum
Hi Jon, 

Thanks for your explanation! 

1. 
I checked my runtime information; it does include the line:

Available memory: 50.0 (GB)

when I set --mem_gb 50

I also set maximum_memory_per_participant = 50 and max_cores_per_participant = 20 in my pipeline_config file.

I am running on a server, and when I run free --giga, I got the result below.
memory.jpeg
It seems that the available memory is 172 GB. I will double-check with the admin to see what my RAM allowance is.

2. I see your point. I am also wondering if there is any documentation that explains what terms such as {"desc-cleaned_bold", "desc-brain_bold", "desc-preproc_bold", "bold"}, "space-bold_desc-brain_mask" and {"epi_1", "epi_1_scan_params", "epi_2", "epi_2_scan_params", "pe_direction"} in the inputs for distortion correction stand for. In particular, I am not sure how I can get inputs such as "epi_1" and "epi_2".

Thanks again for your help!

Best,
Moyan

Jon Clucas, MIS

Jun 23, 2021, 12:53:18 PM
to cpax_forum
1. 🤔 Have your attempts all been for the same subject? I wonder if there's something particular about that data that is leading to excessive memory usage for that step. I assume it's still crashing or hanging in the same place as before when you set --mem_gb 50?

2. The terms themselves are derived from BIDS; these terms in our inputs and outputs are not full BIDS names, but just some number of BIDS entities. In our inputs comments, the structure is a list of required inputs (strings) or of preference-ordered input options (lists). That is, where you see a list-in-a-list in the inputs, you need at least one of the strings inside the inner list to exist when that node runs. If more than one exists, the first one that exists is the one that is used.
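
To illustrate the shape (my own sketch of the pattern, not copied from our source):

# one required input (a plain string), plus one preference-ordered option list:
# inputs: ["space-bold_desc-brain_mask",
#          ["desc-cleaned_bold", "desc-brain_bold", "desc-preproc_bold", "bold"]]

Here the mask must exist, and for the second slot the first entry in the inner list that exists at runtime is the one that gets used.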

Unfortunately, we don't have a document yet that lays out which nodes require/generate which inputs/outputs. One of the improvements in 1.8 over previous versions is the structured comment blocks that spell out the inputs and outputs for each node; we intend for an upcoming documentation update to use those comment blocks to automatically fill in that information in our documentation. Until then, I think the best approach to finding which nodes output which resources is to search our source code.

epi_1 and epi_2 both come from loading fieldmaps from the functional data.

I should clarify, above when I mentioned distortion correction, that was an example of a node that outputs a space-bold_desc-brain_mask; other examples include functional masking using AFNI and functional masking using FSL. Again, you don't need all of these turned on, just enough of them to get from your inputs to your desired outputs in a way that makes sense for your goals.

Jon Clucas, MIS

Jun 23, 2021, 1:02:59 PM
to cpax_forum
Hopefully the flowcharts in our documentation can be at least somewhat helpful in tracing the steps you need:

pipeline-individual.png

functional.png

Moyan Li

Jun 23, 2021, 1:26:25 PM
to cpax_forum
Hi Jon,

Yes, all of my attempts have been for the same subject. As you said, it's still crashing or hanging in the same place, shown below, when running 3dresample.

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 67, in run_node
    result["result"] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 516, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 635, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 741, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 419, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/afni/base.py", line 125, in _run_interface
    runtime, correct_return_codes
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 814, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 745, in raise_exception
    ).format(**runtime.dictcopy())
RuntimeError: Command:
3dresample -orient RPI -prefix rfMRI_REST1_RL_calc_resample.nii.gz -inset /tmp/cpac_100206_ses-1/_scan_func-1/func_reorient_87/rfMRI_REST1_RL_calc.nii.gz
Standard output:

Standard error:
Killed
Return code: 137

My data is from the HCP dataset. The functional image I am using is a 4D NIfTI time series file with dimensions 91*109*91*1200, and the anatomical image is a 3D NIfTI file with dimensions 91*109*91. The resolution is 2 mm. Basically, I am trying to convert that 4D time series data (91*109*91*1200) into 3D volumetric data (91*109*91) based on fALFF. I am not sure why it always leads to excessive memory usage in that 3dresample step even though I have already set maximum_memory_per_participant: 10 in the pipeline_config file.


Best,
Moyan