dMRI preprocessing


Yael Coldham

Jul 3, 2025, 9:46:27 AM
to HCP-Users, יעל אקב
Hi,

We’ve been trying to run the diffusion preprocessing pipeline, but we consistently encounter errors during execution.
All previous steps in the HCP pipeline (PreFreeSurfer, FreeSurfer, and PostFreeSurfer) completed successfully.

The DWI data we use were acquired with these protocols:
(1) dMRI_MB4_185dirs_d15D45_AP
(2) dMRI_MB4_6dirs_d15D45_PA
(3) two SBRef scans

We are using FSL version 6.0.7.1.

From the log files, it appears that the PreEddy step ran successfully, but the pipeline fails during run_eddy.sh.
This is the section of the log where things begin to go wrong:

[screenshot of the log output]

We've been told that we might need AP and PA files with the same number of volumes for this to run successfully, but we have been able to run this in the past on a similar dataset with this same AP/PA volume configuration.

What could be the problem here?

Thanks in advance!

Harms, Michael

Jul 3, 2025, 9:52:50 AM
to hcp-...@humanconnectome.org, יעל אקב

Hi,

That snapshot is unreadable.  Please attach the full log of the output (containing both call and error).

 

You need to run the pipeline in a mode that does not attempt to "pair up" the 185dirs and 6dirs scans.

But assuming that you have b=0 volumes collected in both scans, your acquisition can be accommodated in the processing.

 

Cheers,

-MH

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.                        Tel: 314-747-6173

St. Louis, MO  63110                          Email: mha...@wustl.edu

--
You received this message because you are subscribed to the Google Groups "HCP-Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email to hcp-users+...@humanconnectome.org.
To view this discussion visit https://groups.google.com/a/humanconnectome.org/d/msgid/hcp-users/f8830c3c-33e6-40d0-b265-defd711bc258n%40humanconnectome.org.

 


The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.

Harms, Michael

Jul 3, 2025, 10:28:15 AM
to יעל אקב, Yael Shavit Coldham, hcp-...@humanconnectome.org

You don't have pairs of AP/PA volumes acquired with the same diffusion direction, so you need to use the mode "--combine-data-flag=2"

 

Also, you might want to consider using the --select-best-b0 option.

 

Cheers,

-MH

 

 


Harms, Michael

Jul 3, 2025, 10:29:45 AM
to hcp-...@humanconnectome.org, יעל אקב, Yael Shavit Coldham

 

Those are the flags to the HCP pipeline.  You'll need to translate them into their QuNex equivalent.

 


 

יעל אקב

Jul 3, 2025, 4:48:48 PM
to Harms, Michael, Yael Shavit Coldham, hcp-...@humanconnectome.org

Hi,


We've tried to run the hcp_diffusion command both with the --hcp_nogpu flag and without it.

When running this command (with GPU):

qunex_container hcp_diffusion --hcp_nogpu --sessions="${SESSIONS}" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_negdata="10002_DWI_dir6_PA.nii.gz" --hcp_dwi_posdata="10002_DWI_dir185_AP.nii.gz"

When running this command (no GPU):

qunex_container hcp_diffusion --sessions="${SESSIONS}" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_negdata="10001_DWI_dir6_PA.nii.gz" --hcp_dwi_posdata="10001_DWI_dir185_AP.nii.gz"


The full logs output files are attached.

Thanks in advance!





On Thu, Jul 3, 2025 at 16:52, Harms, Michael <mha...@wustl.edu> wrote:
error(no_gpu)_hcp_diffusion_10001_2025-06-30_09.26.08.181841.log
error_hcp_diffusion_10002_2025-06-30_09.27.31.191653.log

Jure Demsar

Jul 7, 2025, 7:10:29 AM
to HCP-Users, יעל אקב, hcp-...@humanconnectome.org, mha...@wustl.edu, yael....@gmail.com
Hi Mike,

The user added relevant flags to the QuNex command and is now getting the following HCP Pipelines call

/opt/HCP/HCPpipelines/DiffusionPreprocessing/DiffPreprocPipeline.sh \
--path="/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp" \
--subject="10001" \
--PEdir=2 \
--posData="/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir185_AP.nii.gz@EMPTY" \
--negData="EMPTY@/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir6_PA.nii.gz" \
--echospacing="0.689998" \
--gdcoeffs="NONE" \
--dof="6" \
--b0maxbval="50" \
--combine-data-flag="2" \
--printcom="" \
--select-best-b0 \
--cuda-version=10.2

I think this looks fine? Note that they are still getting the missing pairs error.

Mon Jul 7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jul 7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: Wrong Input! No pairs of phase encoding directions have been found!
Mon Jul 7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!
Mon Jul 7 06:39:26 EDT 2025:DiffPreprocPipeline_PreEddy.sh: ERROR: At least one pair is needed!

I thought that "pairless" processing cannot be done with HCP Pipelines, but based on your discussion above I might be wrong. If that is the case, I think this error messaging needs to be tweaked. Note that the user is seeing some other errors before this one as well. Since the QuNex => HCP call seems fine and I am unable to quickly figure this out on my own, I am also posting this here.

Image Exception : #63 :: No image files match: /home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/Diffusion/topup/Pos_b0
No image files match: /home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/Diffusion/topup/Pos_b0
/opt/HCP/HCPpipelines/DiffusionPreprocessing/DiffPreprocPipeline_PreEddy.sh: line 166: % 2: syntax error: operand expected (error token is "% 2")
/opt/HCP/HCPpipelines/DiffusionPreprocessing/DiffPreprocPipeline_PreEddy.sh: line 480: [: -eq: unary operator expected
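(As an aside, those last two shell errors are the classic symptoms of a count variable that ended up empty because no matching volumes were found upstream. A minimal sketch reproducing the same error shapes, with nvols standing in for the pipeline's internal volume count:)

```shell
# Both errors appear when a shell count variable is empty because no matching
# volumes were found upstream. "nvols" stands in for the pipeline's count.
nvols=""

# "% 2: syntax error: operand expected" -- arithmetic on an empty expansion:
( echo $(( $nvols % 2 )) ) 2>/dev/null || echo "arithmetic: operand expected"

# "[: -eq: unary operator expected" -- unquoted empty variable in a test:
[ $nvols -eq 0 ] 2>/dev/null || echo "test: unary operator expected"
```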

Attached is also the full log of the command.

Best, Jure
error_hcp_diffusion_10001_2025-07-07_06.39.22.822482.log

Glasser, Matthew

Jul 7, 2025, 7:23:55 AM
to hcp-...@humanconnectome.org, יעל אקב, Harms, Michael, yael....@gmail.com

We should support having only b0s paired, as this is not that uncommon.


Matt.

 


Harms, Michael

Jul 7, 2025, 12:05:29 PM
to Glasser, Matthew, hcp-...@humanconnectome.org, יעל אקב, yael....@gmail.com

Please first confirm whether the problem exists when using an up-to-date version of the QuNex container.  Some of the diffusion-related HCPpipelines code was fixed back in May 2024.

 

thx

 


 

Glasser, Matthew

Jul 7, 2025, 12:14:39 PM
to Harms, Michael, hcp-...@humanconnectome.org, יעל אקב, yael....@gmail.com

Perhaps this is an issue:

 

--posData="/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir185_AP.nii.gz@EMPTY" \

--negData=EMPTY@/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir6_PA.nii.gz \

 

Perhaps it was not supposed to have the EMPTYs.


Matt.

Harms, Michael

Jul 7, 2025, 12:17:28 PM
to Glasser, Matthew, hcp-...@humanconnectome.org, יעל אקב, yael....@gmail.com

 

No.  That's the correct syntax.  The use of "EMPTY" at different positions in the pos/neg data inputs is precisely how we tell the code that the inputs in the same position are not to be paired.
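(An illustration of that positional syntax, using shortened versions of the filenames from this thread: the pos/neg inputs are @-separated lists matched slot by slot, and "EMPTY" marks the missing member of a pair.)

```shell
# --posData/--negData are '@'-separated lists matched slot by slot; the string
# "EMPTY" marks a slot with no scan of that polarity. Filenames here are
# shortened versions of the ones used earlier in the thread.
posData="dir185_AP.nii.gz@EMPTY"
negData="EMPTY@dir6_PA.nii.gz"

# Walk the slots in parallel, the way the pipeline conceptually pairs them:
i=1
while :; do
  p=$(echo "$posData" | cut -d'@' -f"$i")
  n=$(echo "$negData" | cut -d'@' -f"$i")
  [ -z "$p" ] && [ -z "$n" ] && break
  if [ "$p" = "EMPTY" ] || [ "$n" = "EMPTY" ]; then
    echo "slot $i: unpaired ($p / $n)"
  else
    echo "slot $i: paired ($p / $n)"
  fi
  i=$((i + 1))
done
# prints:
#   slot 1: unpaired (dir185_AP.nii.gz / EMPTY)
#   slot 2: unpaired (EMPTY / dir6_PA.nii.gz)
```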

Yael Coldham

Jul 8, 2025, 1:49:04 AM
to HCP-Users, mha...@wustl.edu, יעל אקב, Yael Coldham, glas...@wustl.edu
Thank you. 

So, the EMPTY is correct for the posData and negData paths?
Are there any other changes you recommend we make to the command to make it work (including TOPUP correction)?

We are using QuNex container version 0.98.1.

Thanks again.

Jure Demsar

Jul 8, 2025, 6:13:02 AM
to HCP-Users, yael....@gmail.com, mha...@wustl.edu, יעל אקב, glas...@wustl.edu
Yes, the insertion of EMPTY strings is correct.

Please try with QuNex 1.2.2: wget --show-progress -O qunex_suite-1.2.2.sif 'https://jd.mblab.si/qunex/qunex_suite-1.2.2.sif'

And report the results in the topic you opened on the QuNex forum.

Best, Jure

Harms, Michael

Jul 8, 2025, 11:05:16 AM
to Yael Coldham, HCP-Users, יעל אקב, Glasser, Matthew

 

Yes, I believe that the 'EMPTY' entries are correct.

 

Please try running with the latest public version of the QuNex container.

 

thx

 


Harms, Michael

Jul 8, 2025, 4:40:20 PM
to Yael Coldham, HCP-Users, יעל אקב, Glasser, Matthew, Demšar, Jure, Harms, Michael

 

Actually, looking at the code more closely, while *conceptually* use of "EMPTY" is how your situation should be handled (since you don't have a pair of dir185 scans with AP and PA polarity with the same set of diffusion directions), to get the current code to work, it may be necessary to pretend that your dir185_AP and dir6_PA scans are indeed a "pair", and eliminate the use of "EMPTY" in the posData and negData inputs.

 

If you are launching this using hcp_diffusion in QuNex, you may need to override the default QuNex behavior and explicitly set:

--hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz

--hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz

so that you don't get the "EMPTY"s included.

 

Give that a try.  If that doesn't work, there may still be a separate issue where you need a newer version of the container to pick up changes to the HCPpipelines code from 2024.

 

Cheers,

-MH

 


 

Yael Coldham

Jul 9, 2025, 4:38:22 AM
to HCP-Users, mha...@wustl.edu, יעל אקב, glas...@wustl.edu, Demšar, Jure, Yael Coldham
Great, we will try running with the new version (1.3.0, or should we try 1.2.1?), and will specifically set:

--hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz

--hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz

And will let you know if it worked. 
Thanks again, everyone.

יעל אקב

Jul 15, 2025, 3:37:38 AM
to Yael Coldham, HCP-Users, mha...@wustl.edu, glas...@wustl.edu, Demšar, Jure

Hi,

We attempted to run the hcp_diffusion module again using the updated QuNex version, but unfortunately, the issue persists.

Below is the command we used:

[screenshot of the command]

I’ve attached the log file showing the error. Any guidance would be appreciated!

Best regards,

Yael


On Wed, Jul 9, 2025 at 11:38, Yael Coldham <yael....@gmail.com> wrote:
error_hcp_diffusion_10001_2025-07-14_14.24.23.730611.log

Jure Demsar

Jul 15, 2025, 3:49:24 AM
to HCP-Users, יעל אקב, HCP-Users, mha...@wustl.edu, glas...@wustl.edu, Demšar, Jure, yael....@gmail.com
Hi,

the error is quite verbose:

EDDY::EddyCudaHelperFunctions::InitGpu: cudaGetDevice returned an error: cudaError_t = 35, cudaErrorName = cudaErrorInsufficientDriver, cudaErrorString = CUDA driver version is insufficient for CUDA runtime version
EDDY::cuda/EddyCudaHelperFunctions.cu:::  static void EDDY::EddyCudaHelperFunctions::InitGpu(bool):  Exception thrown
EDDY::cuda/EddyGpuUtils.cu:::  static std::shared_ptr<EDDY::DWIPredictionMaker> EDDY::EddyGpuUtils::LoadPredictionMaker(const EDDY::EddyCommandLineOptions&, EDDY::ScanType, const EDDY::ECScanManager&, unsigned int, float, NEWIMAGE::volume<float>&, bool):  Exception thrown
EDDY::eddy.cpp:::  EDDY::ReplacementManager* EDDY::Register(const EDDY::EddyCommandLineOptions&, EDDY::ScanType, unsigned int, const std::vector<float, std::allocator<float> >&, EDDY::SecondLevelECModelType, bool, EDDY::ECScanManager&, EDDY::ReplacementManager*, NEWMAT::Matrix&, NEWMAT::Matrix&):  Exception thrown
EDDY::: Eddy failed with message EDDY::eddy.cpp:::  EDDY::ReplacementManager* EDDY::DoVolumeToVolumeRegistration(const EDDY::EddyCommandLineOptions&, EDDY::ECScanManager&):  Exception thrown


This is an issue with your system: you need to update the CUDA drivers to a version that supports CUDA 12.3 or newer.
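(One way to check, sketched by the editor rather than taken from the thread: nvidia-smi reports the installed driver version, which you can compare against the minimum the CUDA runtime needs. The 545.23 minimum for CUDA 12.3 on Linux is from NVIDIA's compatibility tables; verify it against the release notes for your platform.)

```shell
# Sketch of a driver-version check. On the host, get the installed driver via:
#   nvidia-smi --query-gpu=driver_version --format=csv,noheader
# CUDA 12.3 on Linux needs driver >= 545.23 per NVIDIA's compatibility table
# (verify against the release notes for your exact platform).
driver="550.54"       # substitute the value nvidia-smi reports on your system
required="545.23"

# sort -V does a version-aware comparison; if the required version sorts
# first, the installed driver is new enough.
if [ "$(printf '%s\n' "$required" "$driver" | sort -V | head -n1)" = "$required" ]; then
  echo "driver $driver OK for CUDA 12.3"
else
  echo "driver $driver too old for CUDA 12.3 (need >= $required)"
fi
```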

Best, Jure

Harms, Michael

Jul 15, 2025, 12:34:52 PM
to demsa...@gmail.com, HCP-Users, יעל אקב, Glasser, Matthew, Demšar, Jure, yael....@gmail.com

 

This is unrelated to your CUDA driver issue, but just wanted to call your attention to the set of WARNING messages earlier in the log:

 

Mon Jul 14 14:24:25 UTC 2025:DiffPreprocPipeline.sh: WARNING: Using --select-best-b0 prepends the best b0 to the start of the file passed into eddy.

Mon Jul 14 14:24:26 UTC 2025:DiffPreprocPipeline.sh: WARNING: To ensure eddy succesfully aligns this new first b0 with the actual first volume,

Mon Jul 14 14:24:26 UTC 2025:DiffPreprocPipeline.sh: WARNING: we recommend to increase the FWHM for the first eddy iterations if using --select-best-b0

Mon Jul 14 14:24:26 UTC 2025:DiffPreprocPipeline.sh: WARNING: This can be done by setting the --extra_eddy_args=--fwhm=... flag

 

We typically use:

 

     --extra-eddy-arg=--niter=8

     --extra-eddy-arg=--fwhm=10,8,6,4,2,0,0,0  

 

Cheers,

-MH

 

Annchen Knodt

Aug 28, 2025, 10:04:35 AM
to HCP-Users, mha...@wustl.edu, יעל אקב, glas...@wustl.edu, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com
Hello,

I am running into the same issue and wanted to see if a solution has been found.  I have multi-shell diffusion data with 139 diffusion-weighted directions and 8 b0 volumes with AP phase encoding, plus 3 b0 volumes with PA encoding.  I'm trying to run hcp_diffusion in QuNex 1.2.2 and getting the same error.

Pasting the command I'm running below, and attaching the error log.  Any guidance is much appreciated, thanks!

projectname=test_sessions
ID=0223
qdir=/vwork/ark19/qunex/      
conname=qunex_suite-1.2.2.sif
con=$qdir/$conname


qunex_container run_turnkey \
    --container="${con}" \
    --bind="${qdir}" \
    --dataformat="BIDS" \
    --paramfile="$qdir/$projectname/sessions/specs/parameters.txt" \
    --mappingfile="$qdir/$projectname/sessions/specs/hcp_mapping.txt" \
    --workingdir="${qdir}" \
    --projectname="$projectname" \
    --path="${qdir}/$projectname" \
    --sessionsfoldername="sessions" \
    --sessions="$ID" \
    --hcp_dwi_posdata="dir139_AP" \
    --hcp_dwi_negdata="dir0_PA" \
    --turnkeytype="local" \
    --turnkeysteps="create_session_info,setup_hcp,create_batch,hcp_diffusion" \
    --scheduler="SLURM,jobname=qunex_turnkey,time=48:00:00,cpus-per-task=1,gres=gpu:1,partition=gpu-common"

Glasser, Matthew

Aug 28, 2025, 10:06:30 AM
to Annchen Knodt, HCP-Users, Harms, Michael, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

I don’t see anything in that error log file. 

 

Matt.


Annchen Knodt

Aug 28, 2025, 10:10:48 AM
to HCP-Users, glas...@wustl.edu, mha...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com, Annchen Knodt
My apologies, I forgot to ask Dropbox to download the contents of the file.  Hopefully this works now, thanks!
error_hcp_diffusion_0223_2025-08-27_16.58.59.077893.log

Harms, Michael

Aug 28, 2025, 10:19:02 AM
to Annchen Knodt, HCP-Users, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Please see my email from July 8th in this thread, pasted here for convenience:

 

Actually, looking at the code more closely, while *conceptually* use of "EMPTY" is how your situation should be handled (since you don't have a pair of dir185 scans with AP and PA polarity with the same set of diffusion directions), to get the current code to work, it may be necessary to pretend that your dir185_AP and dir6_PA scans are indeed a "pair", and eliminate the use of "EMPTY" in the posData and negData inputs.

 

If you are launching this using hcp_diffusion in QuNex, you may need to override the default QuNex behavior and explicitly set:

--hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz

--hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz

so that you don't get the "EMPTY"s included.

 

Cheers,

-MH

 


Glasser, Matthew

Aug 28, 2025, 10:20:06 AM
to Harms, Michael, Annchen Knodt, HCP-Users, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

Is this something that QuNex could handle properly by default or is this an HCP Pipelines bug?

Matt.

Annchen Knodt

Aug 28, 2025, 10:42:38 AM
to HCP-Users, glas...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com, mha...@wustl.edu, Annchen Knodt
Michael, thanks, yes I did find that earlier comment you made quite helpful, and as such I included these lines in the QuNex command I pasted above:
    --hcp_dwi_posdata="dir139_AP" \
    --hcp_dwi_negdata="dir0_PA" \

Where the names correspond to these two lines of my hcp_mapping.txt file:
dwi acq-AP => DWI:dir139_AP
dwi acq-PA => DWI:dir0_PA


[I also just tried running it with .nii.gz included in the --hcp_dwi_* args, to exactly mirror your suggestion, and got the same result.]  I am not sure if I have one or both of these items specified incorrectly, or if I'm missing something else.

Harms, Michael

Aug 28, 2025, 11:17:25 AM
to Annchen Knodt, HCP-Users, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

If you look at the actual call to DiffPreprocPipeline.sh (at the very top of the error log), it needs to not contain "EMPTY" in the --negData and --posData strings.

 

I'm not sure how to make that happen in QuNex in the context of how you are trying to launch this (e.g., via run_turnkey).

 

Did you try specifying the actual name of the files? E.g.,

    --hcp_dwi_posdata="0223_DWI_dir139_AP.nii.gz" \
    --hcp_dwi_negdata="0223_DWI_dir0_PA.nii.gz" \

 

I think that might work, although it would obviously be specific to just that one subject.

Harms, Michael

Aug 28, 2025, 11:22:03 AM
to Annchen Knodt, HCP-Users, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

As a completely unrelated aside, note that if you don't run the structural preprocessing steps first (or unless you already have), you will eventually get an error in the "PostEddy" stage, where the code brings the dMRI data into the T1w space for the subject.


Glasser, Matthew

Aug 28, 2025, 4:11:06 PM
to Annchen Knodt, HCP-Users, Harms, Michael, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

2.5mm is really low resolution.  We run that on babies who can’t hold still in 20-30s to get some kind of diffusion in their clinical MRI scans.

 

Partial volume correction, which we already need to work on for diffusion measures, will really loom large here for cortical analyses.

 

The overall use case, DWIs in one phase-encoding direction and b0s in both, should be supported.  We are having an internal discussion on how to ensure that.


Matt.

 

From: Annchen Knodt <akn...@gmail.com>
Date: Thursday, August 28, 2025 at 1:46 PM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: "Harms, Michael" <mha...@wustl.edu>, "Glasser, Matthew" <glas...@wustl.edu>, יעל אקב <yael...@gmail.com>, "Demšar, Jure" <jure....@ff.uni-lj.si>, "yael....@gmail.com" <yael....@gmail.com>, "demsa...@gmail.com" <demsa...@gmail.com>, Annchen Knodt <akn...@gmail.com>
Subject: Re: [hcp-users] dMRI preprocessing

 

I've now tried specifying the entire name of the file for the --hcp_dwi_*data args, as well as the full filepath, and it didn't change anything.  I realized that maybe it doesn't even matter what I put there, and sure enough just a completely random string didn't change anything either.  Then it occurred to me that I could try to reformat my dwi data so that I have one file for each of the b0 pairs (AP&PA) and then a separate file for all the unpaired diffusion-weighted volumes (AP).

WIth this, the lines in my hcp_mapping file look like:
dwi acq-APdir139 => DWI:dir139_AP
dwi acq-APb0 => DWI:b0_AP
dwi acq-PAb0 => DWI:b0_PA


FWIW I put just one b0 volume in the b0 files for testing.  Sure enough, this gets me past the "No pairs of phase encoding directions have been found!" errors, but then I fail pretty quickly again with this message: 
Image Exception : #63 :: No image files match: /vwork/ark19/qunex/test_sessions/sessions/0223/hcp/0223/Diffusion/rawdata/Pos_2_b0_????.nii*

Full log attached.  I will poke around in QuNex and experiment with the way I am setting up the call, but would love any feedback you're able to provide here as well.

And thanks for the reminder that I need to make sure QuNex can see the output from the structural preprocessing I've already run (presumably that's still not my issue, as I haven't gotten past PreEddy).

As another unrelated aside, the resolution on these data is 2.5mm and I've been wondering if it's even appropriate to be attempting this given these pipelines were originally written for the 1.25mm HCP data.  My primary end goal is getting cortical NODDI estimates a la this 2018 Neuroimage paper, and my understanding is that even though our resolution isn't ideal for cortex, we're likely still better off ultimately mapping to the surface rather than staying in the volume - happy for feedback if I'm way off on this.

Harms, Michael

unread,
Aug 28, 2025, 5:54:23 PM (10 days ago) Aug 28
to Annchen Knodt, HCP-Users, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Hmm, Pos_2_b0*nii should be a NIFTI containing the b=0 volumes in the 2nd file listed in --posData, which would be 0223_DWI_dir139_AP.nii.gz in the latest attempt.

 

Does that NIFTI actually contain any b=0 volumes?
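A quick way to check is to count the near-zero entries in the scan's .bval file, which lists one b-value per volume. This is a sketch with a made-up .bval standing in for the real one:

```shell
# Stand-in .bval content; in practice, point this at the scan's real .bval file
# (e.g. the one next to 0223_DWI_dir139_AP.nii.gz).
printf '0 1000 1000 0 2000 1000 0\n' > example.bval

# Count volumes with b ~ 0 (values under ~50 are conventionally treated as b=0).
n_b0=$(awk '{for (i = 1; i <= NF; i++) if ($i < 50) n++} END {print n+0}' example.bval)
echo "b=0 volumes: $n_b0"
```

If the count is 0, the pipeline has no b=0 volumes to extract from that series, which would produce exactly the "No image files match: ...Pos_2_b0_????.nii*" failure.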

 

 

-- 

Michael Harms, Ph.D.

-----------------------------------------------------------

Professor of Psychiatry

Washington University School of Medicine

Department of Psychiatry, Box 8134

660 South Euclid Ave.                        Tel: 314-747-6173

St. Louis, MO  63110                          Email: mha...@wustl.edu

 


Harms, Michael

unread,
Aug 28, 2025, 5:58:44 PM (10 days ago) Aug 28
to Annchen Knodt, HCP-Users, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Also, do you actually have a b=0 acquired with "AP" polarity, for the 0223_DWI_b0_AP.nii.gz file that you created?  If not, while you might be able to get the code to not error out, the result won't be valid, because it would incorrectly be treating a b=0 volume acquired with PA polarity as if it was acquired as AP polarity.

Harms, Michael

unread,
Aug 28, 2025, 6:06:12 PM (10 days ago) Aug 28
to hcp-...@humanconnectome.org, Annchen Knodt, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Also, as Matt mentioned, we are scoping out a proper fix for this, if you just want to wait a couple weeks…

 


 

Annchen Knodt

unread,
Aug 29, 2025, 1:17:47 AM (9 days ago) Aug 29
to HCP-Users, mha...@wustl.edu, glas...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com, Annchen Knodt
error_hcp_diffusion_0223_2025-08-28_13.32.42.170055.log

Harms, Michael

unread,
Aug 29, 2025, 12:02:08 PM (9 days ago) Aug 29
to demsa...@gmail.com, HCP-Users, akn...@gmail.com, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com

 

Note, though, that the 'dwi_legacy_cpu' approach is not part of the HCPpipelines, and doesn't include TOPUP for susceptibility distortion correction (which is quite important).

 


 

From: Jure Demsar <demsa...@gmail.com>
Date: Friday, August 29, 2025 at 12:37 AM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: "akn...@gmail.com" <akn...@gmail.com>, "Harms, Michael" <mha...@wustl.edu>, "Glasser, Matthew" <glas...@wustl.edu>, יעל אקב <yael...@gmail.com>, Jure Demsar <jure....@ff.uni-lj.si>, "yael....@gmail.com" <yael....@gmail.com>, "demsa...@gmail.com" <demsa...@gmail.com>
Subject: Re: [hcp-users] dMRI preprocessing

 

Hi,

 

While you are waiting for fixes on our end, you could try out our legacy DWI pipeline (https://qunex.readthedocs.io/en/latest/api/gmri/dwi_legacy_gpu.html) and see if the outputs there are OK for what you need. Once you have the outputs from that, you can do some test NODDI processing (https://qunex.readthedocs.io/en/latest/api/gmri/dwi_noddi_gpu.html).

 

Best, Jure


Annchen Knodt

unread,
Aug 29, 2025, 5:53:52 PM (9 days ago) Aug 29
to Harms, Michael, hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

Thanks so much for the help and feedback!

To answer your questions: no, the 0223_DWI_dir139_AP.nii.gz does not contain any b0 volumes, so it sounds like that would explain the error.  My first intuition when reorganizing the data into paired and unpaired volumes was to drop the b0s from the NIFTI with the dwi volumes, such that everything in there was "unpaired".

 

I just tried running it with ALL my AP volumes in the big NIFTI (b0 and b>0; I am calling this 0223_DWI_dir147_AP.nii.gz now, since I have 8 b0s along with my 139 dwis), as well as an AP b0 and a PA b0 as in my last run.  So yes, to further clarify, I do have actual b0s in both AP and PA. 

 

For this reorganization test, the 0223_DWI_b0_AP.nii.gz and 0223_DWI_b0_PA.nii.gz files each contain the average of all the b0s in that direction (mainly because it was easy to just copy over from another pipeline I was running).  I did get further when I made this update (including b0s in the big NIFTI with the dwis), but then it seems to have failed during topup (log attached).

 

I’m assuming that even though this got further, the pipeline still requires updates, but let me know if I’m wrong and I will see if I can troubleshoot this and any subsequent issues with what I’ve learned now.  Thanks!

error_hcp_diffusion_0223_2025-08-29_17.29.37.015636.log

Harms, Michael

unread,
Aug 29, 2025, 6:12:32 PM (9 days ago) Aug 29
to hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Just supply all the scans that you have, without any external averaging or reorganization.  Note that each scan needs to contain at least 1 b=0 volume, because that's what the code uses to normalize for intensity differences across scans.

 

If you do that, and the inputs meet that condition, it should run to completion.
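For intuition, here is a simplified sketch of that normalization step (not the pipeline's actual code; the function name and target value are made up): each series is rescaled so its mean b=0 intensity matches a common target, which is impossible if a series contains no b=0 volume.

```python
import numpy as np

def normalize_series(series_list, bvals_list, target=1000.0):
    """Rescale each 4D series by its mean b~0 intensity (simplified sketch)."""
    out = []
    for data, bvals in zip(series_list, bvals_list):
        b0 = data[..., np.asarray(bvals) < 50]   # b-values under ~50 treated as b=0
        if b0.shape[-1] == 0:
            raise ValueError("series has no b=0 volume; cannot normalize")
        out.append(data * (target / b0.mean()))
    return out

# Toy stand-ins for an AP and a PA series with different baseline intensities.
ap = np.full((2, 2, 2, 3), 200.0)
pa = np.full((2, 2, 2, 2), 400.0)
norm = normalize_series([ap, pa], [[0, 1000, 1000], [0, 0]])
print(norm[0][..., 0].mean(), norm[1][..., 0].mean())   # both rescaled to 1000.0
```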

Annchen Knodt

unread,
Sep 2, 2025, 10:17:53 AM (5 days ago) Sep 2
to HCP-Users, mha...@wustl.edu, glas...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com
Ok great, I got it to run!  To recap:
  • My data were originally stored in 2 nii.gz files, one for each phase encoding direction: AP.nii.gz had 139 dwis and 8 b0s; PA.nii.gz had 3 b0s.  This did not work as input to hcp_diffusion because there were no "paired" files.
  • To get a configuration that worked, I created another AP_b0s.nii.gz by copying the first 3 b0s from AP.nii.gz - that gave me the paired b0s files I needed.  This did not work if I had only a single b0 in the paired b0s files, so it seems like it needs more than one.
Thanks again for your help!
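The extraction step described in the second bullet can be sketched like this (synthetic arrays stand in for the NIFTI data; in practice nibabel or an FSL tool such as fslselectvols would do the file I/O):

```python
import numpy as np

# Synthetic 4D series (x, y, z, volumes) standing in for AP.nii.gz.
rng = np.random.default_rng(0)
dwi = rng.random((4, 4, 4, 7))
bvals = np.array([0, 1000, 0, 2000, 0, 1000, 1000])

# Select the b~0 volumes (b-values under ~50 treated as b=0) to build
# a paired-b0 file analogous to AP_b0s.nii.gz.
b0_mask = bvals < 50
b0_vols = dwi[..., b0_mask]

print(b0_vols.shape)  # (4, 4, 4, 3)
```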

Harms, Michael

unread,
Sep 2, 2025, 10:22:38 AM (5 days ago) Sep 2
to hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

Glad you got it working.  But a single b=0 should (in principle) work fine.  What was the error when you tried that configuration?

Harms, Michael

unread,
Sep 2, 2025, 12:19:52 PM (5 days ago) Sep 2
to hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Just to clarify, I meant that a "single b=0" in each input file should be sufficient.  Did you test that particular configuration?

 


 

Annchen Knodt

unread,
Sep 3, 2025, 2:14:23 PM (4 days ago) Sep 3
to HCP-Users, mha...@wustl.edu, glas...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com
Yes, when I tested it with just a single volume in each of the b0 files, I got this error:

Image Exception : #63 :: No image files match: /vwork/ark19/qunex/test_sessions/sessions/0223/hcp/0223/Diffusion/rawdata/Pos_2_b0_????.nii*

(further details in this previous post)

Note that for this run the single volumes were averages of (1) the 8 AP b0s and (2) the 3 PA b0s - I assume this wouldn't make any difference logistically, but I have not tested it using a single raw b0 volume in each of the files.

Harms, Michael

unread,
Sep 3, 2025, 2:25:40 PM (4 days ago) Sep 3
to hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

But we determined that was due to 0223_DWI_dir139_AP.nii.gz not containing any b=0 volumes at all, right?  Which is a different issue from only having a *single* b=0 available in each series.

Annchen Knodt

unread,
Sep 3, 2025, 2:45:34 PM (4 days ago) Sep 3
to HCP-Users, mha...@wustl.edu, glas...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com
Ah, oops - it was the next iteration then, where it instead failed during topup, as detailed in this post.

Harms, Michael

unread,
Sep 3, 2025, 3:24:39 PM (4 days ago) Sep 3
to hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

Did you ever run a test with just your two original NIFTI files, which per your earlier statement each indeed had at least one b=0:

 

My data were originally stored in 2 nii.gz files, one for each phase encoding direction: AP.nii.gz had 139 dwis and 8 b0s; PA.nii.gz had 3 b0s.  This did not work as inputs to hcp_diffusion bc there were no "paired" files

 

BUT WITHOUT the use of `EMPTY` in the pos/negData input strings?

(Such an invocation should be possible within QuNex by just getting the inputs to `hcp_diffusion` set correctly.)

 

Provided you used --combine-data-flag=2, I believe that should have worked, with no manipulation of the original NIFTI.
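For illustration only (an assumption about what the underlying call would look like, not the exact QuNex-generated invocation): the HCPpipelines diffusion script takes @-separated --posData/--negData lists, and the suggestion above amounts to passing each original scan directly, with no EMPTY placeholders:

```shell
# Hypothetical fragment of a DiffPreprocPipeline-style call (paths made up):
--posData="AP.nii.gz" \
--negData="PA.nii.gz" \
--combine-data-flag=2
```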

 

Meanwhile, we are working on fixing this within QuNex.

Annchen Knodt

unread,
Sep 3, 2025, 4:41:01 PM (4 days ago) Sep 3
to HCP-Users, mha...@wustl.edu, glas...@wustl.edu, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com
I had not tried that - probably I was missing something somewhere (or maybe because I was running with run_turnkey?), but it didn't seem to matter what I put in the --hcp_dwi_pos/negdata arguments for hcp_diffusion (at one point I just put a random string and nothing changed).  I did have --combine-data-flag=2 the whole time.  If it'd be helpful for me to test this out, let me know - happy to do so!

Harms, Michael

unread,
Sep 3, 2025, 4:50:26 PM (4 days ago) Sep 3
to Annchen Knodt, HCP-Users, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

That should have worked, per our testing so far on our end, but if you don't want to figure out how to make that happen through QuNex, that's fine.

 

But I wouldn't actually use what I believe you've done, which, per my understanding, is to extract and average the b=0s outside of the pipeline code itself.

Harms, Michael

unread,
Sep 5, 2025, 10:07:26 AM (2 days ago) Sep 5
to hcp-...@humanconnectome.org, Glasser, Matthew, יעל אקב, Demšar, Jure, yael....@gmail.com, demsa...@gmail.com

 

Hi Annchen,

Give the very latest version of QuNex (1.3.3) a try.  It should work with your data without any special modifications on your part, i.e., just import the two original files as you did previously.  (I believe they were "10002_DWI_dir6_PA.nii.gz" and "10002_DWI_dir185_AP.nii.gz"?)  QuNex will detect that you have no "pairs" based on the file names, automatically switch to using --combine-data-flag=2 in the call to hcp_diffusion, and construct appropriate pos/negData inputs for that situation.

 

If that doesn't work, please let us know.

 

Cheers,

-MH

 


 
