Hello,
I am using C-PAC 1.6.2 and unfortunately I still have a few problems that I don't know how to resolve. First, the program raises several SameFileError exceptions. These seem to refer to identically named files in the working and output directories, but those files are created by C-PAC itself, so I cannot influence them. Below is an excerpt of the error message:
Traceback (most recent call last):
File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/utils/filemanip.py", line 472, in copyfile
shutil.copyfile(originalfile, newfile)
File "/usr/local/miniconda/lib/python3.7/shutil.py", line 104, in copyfile
raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
shutil.SameFileError: '/output/working/resting_preproc_sub-C002JOBA_ses-1/anat_preproc_afni_0/anat_reorient/sub-C002JOBA_T1w_resample.nii.gz' and '/output/output/pipeline_Pipeline_new_freq-filter_nuisance/sub-C002JOBA_ses-1/anatomical_reorient/sub-C002JOBA_T1w_resample.nii.gz' are the same file
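For context, shutil.copyfile raises this error whenever source and destination resolve to the same underlying file on disk, which can happen even when the two paths differ, for example through a hardlink or symlink (nipype's copyfile can create hardlinks). A minimal sketch reproducing the exception, with throwaway file names:

```python
import os
import shutil
import tempfile

# Two different paths that point at the same underlying file:
# shutil.copyfile refuses to copy and raises SameFileError, exactly
# as in the traceback above. The file names here are placeholders.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "original.nii.gz")
    with open(src, "wb") as f:
        f.write(b"dummy")
    dst = os.path.join(tmp, "link.nii.gz")
    os.symlink(src, dst)  # dst now resolves to the same file as src
    try:
        shutil.copyfile(src, dst)
    except shutil.SameFileError as err:
        print("refused:", err)
```

Since both paths in the error above live under /output, this suggests the working and output directories end up sharing files rather than holding independent copies.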
My second problem, which I think is the reason the processing of the data breaks down: files created by C-PAC contain only 299 time points, although my input was a file with 300 volumes (converted with dcm2nii):
Standard error:
++ 3dTcorr1D: AFNI version=AFNI_20.1.01 (Apr 14 2020) [64-bit]
+ reading dataset file /output/working/resting_preproc_sub-C002JOBA_ses-1/sca_roi_1/_scan_rest/_selector_CSF-2mmE-M_aC-CSF+WM-2mm-DPC5_M-SDB_P-2_BP-B0.01-T0.1/_mask_striatum_mask_file_..Data_preproc..Masks..striatum.nii.gz/3dTCorr1D/bandpassed_demeaned_filtered.nii.gz
+ reading 1D file /output/working/resting_preproc_sub-C002JOBA_ses-1/roi_timeseries_for_sca_1/_scan_rest/_selector_CSF-2mmE-M_aC-CSF+WM-2mm-DPC5_M-SDB_P-2_BP-B0.01-T0.1/_mask_striatum_mask_file_..Data_preproc..Masks..striatum.nii.gz/clean_roi_csv/roi_stats.csv
** FATAL ERROR: 1D file /output/working/resting_preproc_sub-C002JOBA_ses-1/roi_timeseries_for_sca_1/_scan_rest/_selector_CSF-2mmE-M_aC-CSF+WM-2mm-DPC5_M-SDB_P-2_BP-B0.01-T0.1/_mask_striatum_mask_file_..Data_preproc..Masks..striatum.nii.gz/clean_roi_csv/roi_stats.csv has 299 time points, but dataset has 300 values
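One way to narrow down where the extra volume is lost is to count the time points in the original input and in each intermediate file. A small stdlib-only sketch (no nibabel needed) that reads dim[4] directly from a NIfTI-1 header; the usage path is a placeholder, not taken from the logs:

```python
import gzip
import struct

def nifti_num_volumes(path):
    """Return the number of time points (dim[4]) from a NIfTI-1 header.

    Works on .nii and .nii.gz with the standard library only: the header
    is 348 bytes, and the dim array is eight int16 values at offset 40.
    """
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        header = f.read(348)
    # sizeof_hdr (int32 at offset 0) must equal 348; use it to detect byte order.
    endian = "<" if struct.unpack_from("<i", header, 0)[0] == 348 else ">"
    dim = struct.unpack_from(endian + "8h", header, 40)
    return dim[4] if dim[0] >= 4 else 1  # dim[0] = number of dimensions

# Hypothetical usage (placeholder file name):
# print(nifti_num_volumes("sub-C002JOBA_task-rest_bold.nii.gz"))
```

Comparing the raw dcm2nii output against the bandpassed file in the working directory would show whether a volume is dropped during preprocessing (e.g. by a deliberate first-volume removal step) or already missing from the input.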
I have attached my log file and the pipeline I used. I would be very grateful if someone has ideas and could help me with these problems.
To pull the nightly build:

docker pull fcpindi/c-pac:nightly
singularity pull docker://fcpindi/c-pac:nightly

For future reference, we had a similar issue on one of our servers recently (the same "Cannot obtain lock" error), which turned out to be a relatively straightforward fix. server1 was trying to run a pipeline and save outputs to an NFS share hosted by server2, which we had recently upgraded. It turned out that the /etc/fstab on server1 specified the NFS version in the options column and was mounting NFS 4 shares as NFS 3 shares. We were able to fix this by replacing nfsvers=3 with nfsvers=4 in the options column for the affected volumes.
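For reference, the /etc/fstab change might look like this (server names, export paths, and mount points are placeholders; only the nfsvers option comes from the fix described above):

```
# /etc/fstab on server1 -- before: forces NFSv3 against the upgraded server
server2:/export/pipelines  /mnt/pipelines  nfs  rw,nfsvers=3  0 0

# after: mount with the protocol version the server actually speaks
server2:/export/pipelines  /mnt/pipelines  nfs  rw,nfsvers=4  0 0
```

After editing fstab, remounting the affected volume (e.g. umount followed by mount) applies the new option.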