Hi,
That snapshot is unreadable. Please attach the full log of the output (containing both call and error).
You need to run the pipeline in a mode that does not attempt to "pair up" the dir185 and dir6 scans.
But assuming that you have b=0 volumes collected in both scans, your acquisition can be accommodated in the processing.
Cheers,
-MH
--
Michael Harms, Ph.D.
-----------------------------------------------------------
Professor of Psychiatry
Washington University School of Medicine
Department of Psychiatry, Box 8134
660 South Euclid Ave., St. Louis, MO 63110
Tel: 314-747-6173
Email: mha...@wustl.edu
You don't have pairs of AP/PA volumes acquired with the same diffusion directions, so you need to use the mode "--combine-data-flag=2".
Also, you might want to consider using the --select-best-b0 option.
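For orientation, both of those are options of the underlying HCPpipelines DiffPreprocPipeline.sh script. A rough sketch of where they would sit in a direct call (everything other than those two flags is a placeholder here, not a value taken from your data):

${HCPPIPEDIR}/DiffusionPreprocessing/DiffPreprocPipeline.sh \
  --path=<StudyFolder> \
  --subject=<SubjectID> \
  --posData=<AP_scan.nii.gz> \
  --negData=<PA_scan.nii.gz> \
  --PEdir=<1_or_2> \
  --echospacing=<echo_spacing> \
  --gdcoeffs=NONE \
  --combine-data-flag=2 \
  --select-best-b0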
Cheers,
-MH
From: יעל אקב <yael...@gmail.com>
Date: Thursday, July 3, 2025 at 9:11 AM
To: "Harms, Michael" <mha...@wustl.edu>, Yael Shavit Coldham <yael....@gmail.com>
Cc: "hcp-...@humanconnectome.org" <hcp-...@humanconnectome.org>
Subject: Re: [hcp-users] dMRI preprocessing
Hi,
We've tried to run the hcp_diffusion command both with the --hcp_nogpu flag and without it.
When running this command (no GPU):
qunex_container hcp_diffusion --hcp_nogpu --sessions="${SESSIONS}" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_negdata="10002_DWI_dir6_PA.nii.gz" --hcp_dwi_posdata="10002_DWI_dir185_AP.nii.gz"
When running this command (with GPU):
qunex_container hcp_diffusion --sessions="${SESSIONS}" --mappingfile="${INPUT_MAPPING_FILE}" --container="${QUNEX_CONTAINER}" --dockeropt="-v ${BIND_FOLDER}:${BIND_FOLDER}" --sessionsfolder="${STUDY_FOLDER}/sessions" --batchfile="${STUDY_FOLDER}/processing/batch.txt" --hcp_dwi_negdata="10001_DWI_dir6_PA.nii.gz" --hcp_dwi_posdata="10001_DWI_dir185_AP.nii.gz"
The full logs output files are attached.
Thanks in advance!
On Thursday, July 3, 2025 at 16:52, Harms, Michael <mha...@wustl.edu> wrote:
We've been told that we might need AP and PA files with the same number of volumes in order for this to run successfully, but we have been able to run this in the past on a similar dataset with the same number of AP and PA volumes.
What could be the problem here?
Thanks in advance!
Those are the flags to the HCP pipeline. You'll need to translate them into their QuNex equivalent.
--
Michael Harms, Ph.D.
We should support having only b0s paired, as this is not that uncommon.
Matt.
From: Jure Demsar <demsa...@gmail.com>
Reply-To: "hcp-...@humanconnectome.org" <hcp-...@humanconnectome.org>
Date: Monday, July 7, 2025 at 6:10 AM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: יעל אקב <yael...@gmail.com>, "hcp-...@humanconnectome.org" <hcp-...@humanconnectome.org>, "Harms, Michael" <mha...@wustl.edu>, "yael....@gmail.com" <yael....@gmail.com>
Subject: Re: [hcp-users] dMRI preprocessing
Hi Mike,
We've been told that we might need AP and PA files with the same number of volumes in order for this to run successfully, but we have been able to run this in the past on a similar dataset with the same number of AP and PA volumes.
What could be the problem here?
Thanks in advance!
Please first confirm whether the problem exists when using an up-to-date version of the QuNex container. Some of the diffusion-related HCPpipelines code was fixed back in May 2024.
thx
--
Michael Harms, Ph.D.
Perhaps this is an issue:
--posData="/home/docker/volumes/hcppipelines/yael_practice/PTSD_BB_study/sessions/10001/hcp/10001/unprocessed/Diffusion/10001_DWI_dir185_AP.nii.gz@EMPTY" \
i.e., perhaps it was not supposed to have the EMPTYs?
Matt.
No. That's the correct syntax. The use of "EMPTY" at different positions in the pos/neg data inputs is precisely how we tell the code that the inputs in the same position are not to be paired.
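To make that concrete: for a dir185 AP scan with no matching PA scan, and a dir6 PA scan with no matching AP scan, the inputs would presumably look something like this (illustrative sketch, paths abbreviated):

--posData="<...>/10001_DWI_dir185_AP.nii.gz@EMPTY" \
--negData="EMPTY@<...>/10001_DWI_dir6_PA.nii.gz" \

so that no position contains both an AP and a PA entry, and nothing is treated as a pair.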
Yes, I believe that the 'EMPTY' entries are correct.
Please try running with the latest public version of the QuNex container.
thx
--
Michael Harms, Ph.D.
Actually, looking at the code more closely, while *conceptually* use of "EMPTY" is how your situation should be handled (since you don't have a pair of dir185 scans with AP and PA polarity with the same set of diffusion directions), to get the current code to work, it may be necessary to pretend that your dir185_AP and dir6_PA scans are indeed a "pair", and eliminate the use of "EMPTY" in the posData and negData inputs.
If you are launching this using hcp_diffusion in QuNex, you may need to override the default QuNex behavior and explicitly set:
--hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz
--hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz
so that you don't get the "EMPTY"s included.
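With that override, the strings that end up in the DiffPreprocPipeline.sh call would look roughly like this (illustrative; same unprocessed/Diffusion paths as in the call quoted earlier in the thread):

--posData="<...>/unprocessed/Diffusion/10001_DWI_dir185_AP.nii.gz" \
--negData="<...>/unprocessed/Diffusion/10001_DWI_dir6_PA.nii.gz" \

i.e., one entry per phase-encoding direction and no @EMPTY placeholders.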
Give that a try. If that doesn't work, there may still be a separate issue where you need a newer version of the container to pick up changes to the HCPpipelines code from 2024.
Cheers,
-MH
--hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz
--hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz
Hi,
We attempted to run the hcp_diffusion module again using the updated QuNex version, but unfortunately, the issue persists.
Below is the command we used:
I’ve attached the log file showing the error. Any guidance would be appreciated!
Best regards,
Yael
This is unrelated to your CUDA driver issue, but just wanted to call your attention to the set of WARNING messages earlier in the log:
Mon Jul 14 14:24:25 UTC 2025:DiffPreprocPipeline.sh: WARNING: Using --select-best-b0 prepends the best b0 to the start of the file passed into eddy.
Mon Jul 14 14:24:26 UTC 2025:DiffPreprocPipeline.sh: WARNING: To ensure eddy succesfully aligns this new first b0 with the actual first volume,
Mon Jul 14 14:24:26 UTC 2025:DiffPreprocPipeline.sh: WARNING: we recommend to increase the FWHM for the first eddy iterations if using --select-best-b0
Mon Jul 14 14:24:26 UTC 2025:DiffPreprocPipeline.sh: WARNING: This can be done by setting the --extra_eddy_args=--fwhm=... flag
We typically use:
--extra-eddy-arg=--niter=8
--extra-eddy-arg=--fwhm=10,8,6,4,2,0,0,0
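In a direct call to DiffPreprocPipeline.sh these are simply appended as repeated options, e.g. (a sketch; the rest of the call is whatever you are already using):

DiffPreprocPipeline.sh <existing arguments> \
  --select-best-b0 \
  --extra-eddy-arg=--niter=8 \
  --extra-eddy-arg=--fwhm=10,8,6,4,2,0,0,0

If you launch via QuNex, check the hcp_diffusion documentation for the parameter that forwards extra eddy arguments; the exact QuNex parameter name isn't shown in this thread.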
Cheers,
-MH
From: Jure Demsar <demsa...@gmail.com>
Date: Tuesday, July 15, 2025 at 2:49 AM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: יעל אקב <yael...@gmail.com>, HCP-Users <hcp-...@humanconnectome.org>, "Harms, Michael" <mha...@wustl.edu>, "Glasser, Matthew" <glas...@wustl.edu>, Jure Demsar <jure....@ff.uni-lj.si>, "yael....@gmail.com" <yael....@gmail.com>
Subject: Re: [hcp-users] dMRI preprocessing
Hi,
the error is quite verbose:
EDDY::EddyCudaHelperFunctions::InitGpu: cudaGetDevice returned an error: cudaError_t = 35, cudaErrorName = cudaErrorInsufficientDriver, cudaErrorString = CUDA driver version is insufficient for CUDA runtime version
EDDY::cuda/EddyCudaHelperFunctions.cu::: static void EDDY::EddyCudaHelperFunctions::InitGpu(bool): Exception thrown
EDDY::cuda/EddyGpuUtils.cu::: static std::shared_ptr<EDDY::DWIPredictionMaker> EDDY::EddyGpuUtils::LoadPredictionMaker(const EDDY::EddyCommandLineOptions&, EDDY::ScanType, const EDDY::ECScanManager&, unsigned int, float, NEWIMAGE::volume<float>&, bool): Exception thrown
EDDY::eddy.cpp::: EDDY::ReplacementManager* EDDY::Register(const EDDY::EddyCommandLineOptions&, EDDY::ScanType, unsigned int, const std::vector<float, std::allocator<float> >&, EDDY::SecondLevelECModelType, bool, EDDY::ECScanManager&, EDDY::ReplacementManager*, NEWMAT::Matrix&, NEWMAT::Matrix&): Exception thrown
EDDY::: Eddy failed with message EDDY::eddy.cpp::: EDDY::ReplacementManager* EDDY::DoVolumeToVolumeRegistration(const EDDY::EddyCommandLineOptions&, EDDY::ECScanManager&): Exception thrown
This is an issue with your system; you need to update the CUDA drivers to a version that supports CUDA 12.3 or newer.
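A quick, generic way to check what your driver supports (not specific to QuNex or eddy): run nvidia-smi on the host; the "CUDA Version" shown in its header is the highest CUDA runtime the installed driver supports, and it needs to be 12.3 or newer here.

nvidia-smi    # header line reports Driver Version and the maximum supported CUDA Version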
Best, Jure
On Tuesday, 15 July 2025 at 09:37:38 UTC+2 יעל אקב wrote:
Hi,
We attempted to run the hcp_diffusion module again using the updated QuNex version, but unfortunately, the issue persists. Below is the command we used:
I don’t see anything in that error log file.
Matt.
Please see my email from July 8th in this thread, pasted here for convenience:
Actually, looking at the code more closely, while *conceptually* use of "EMPTY" is how your situation should be handled (since you don't have a pair of dir185 scans with AP and PA polarity with the same set of diffusion directions), to get the current code to work, it may be necessary to pretend that your dir185_AP and dir6_PA scans are indeed a "pair", and eliminate the use of "EMPTY" in the posData and negData inputs.
If you are launching this using hcp_diffusion in QuNex, you may need to override the default QuNex behavior and explicitly set:
--hcp_dwi_posdata=10001_DWI_dir185_AP.nii.gz
--hcp_dwi_negdata=10001_DWI_dir6_PA.nii.gz
so that you don't get the "EMPTY"s included.
Cheers,
-MH
Is this something that QuNex could handle properly by default or is this an HCP Pipelines bug?
Matt.
If you look at the actual call to DiffPreProcPipeline.sh (at the very top of the error log), it needs to not contain "EMPTY" in the --negData and --posData strings.
I'm not sure how to make that happen in QuNex in the context of how you are trying to launch this (e.g., via run_turnkey).
Did you try specifying the actual name of the files? E.g.,
--hcp_dwi_posdata="0223_DWI_dir139_AP.nii.gz" \
--hcp_dwi_negdata="0223_DWI_dir0_PA.nii.gz" \
I think that might work, although it would obviously be specific to just that one subject.
As a completely unrelated aside, note that if you don't run the structural preprocessing steps first (or unless you already have), you will eventually get an error in the "PostEddy" stage, where the code brings the dMRI data into the T1w space for the subject.
2.5mm is really low resolution. We run that on babies who can’t hold still in 20-30s to get some kind of diffusion in their clinical MRI scans.
Partial volume correction, which we already need to work on for diffusion measures, will really loom large here for cortical analyses.
The overall use case (DWIs in one phase-encoding direction and b=0s in both directions) should be supported. We are having an internal discussion on how to ensure that.
Matt.
From: Annchen Knodt <akn...@gmail.com>
Date: Thursday, August 28, 2025 at 1:46 PM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: "Harms, Michael" <mha...@wustl.edu>, "Glasser, Matthew" <glas...@wustl.edu>, יעל אקב <yael...@gmail.com>, "Demšar, Jure" <jure....@ff.uni-lj.si>, "yael....@gmail.com" <yael....@gmail.com>, "demsa...@gmail.com" <demsa...@gmail.com>, Annchen Knodt <akn...@gmail.com>
Subject: Re: [hcp-users] dMRI preprocessing
I've now tried specifying the entire name of the file for the --hcp_dwi_*data args, as well as the full filepath, and it didn't change anything. I realized that maybe it doesn't even matter what I put there, and sure enough a completely random string didn't change anything either. Then it occurred to me that I could try reformatting my DWI data so that I have one file for each of the b0 pairs (AP & PA) and then a separate file for all the unpaired diffusion-weighted volumes (AP).
With this, the lines in my hcp_mapping file look like:
dwi acq-APdir139 => DWI:dir139_AP
dwi acq-APb0 => DWI:b0_AP
dwi acq-PAb0 => DWI:b0_PA
FWIW, I put just one b0 volume in the b0 files for testing. Sure enough, this gets me past the "No pairs of phase encoding directions have been found!" errors, but then I fail pretty quickly again with this message:
Image Exception : #63 :: No image files match: /vwork/ark19/qunex/test_sessions/sessions/0223/hcp/0223/Diffusion/rawdata/Pos_2_b0_????.nii*
Full log attached. I will poke around in QuNex and experiment with the way I am setting up the call, but would love any feedback you're able to provide here as well.
And thanks for the reminder that I need to make sure QuNex can see the output from the structural preprocessing I've already run (presumably that's still not my issue, as I haven't gotten past PreEddy).
As another unrelated aside, the resolution of these data is 2.5mm, and I've been wondering if it's even appropriate to be attempting this given that these pipelines were originally written for the 1.25mm HCP data. My primary end goal is getting cortical NODDI estimates a la the 2018 NeuroImage paper, and my understanding is that even though our resolution isn't ideal for cortex, we're likely still better off ultimately mapping to the surface rather than staying in the volume - happy for feedback if I'm way off on this.
Hmm, Pos_2_b0*nii should be a NIFTI containing the b=0 volumes in the 2nd file listed in --posData, which would be 0223_DWI_dir139_AP.nii.gz in the latest attempt.
Does that NIFTI actually contain any b=0 volumes?
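If you want a quick way to check, one option (a generic sketch, assuming the standard .bval sidecar sits next to the NIFTI; treating anything under b=50 as "effectively b=0" is just a common convention):

awk '{n=0; for (i=1; i<=NF; i++) if ($i < 50) n++; print n " volumes with b<50"}' 0223_DWI_dir139_AP.bval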
--
Michael Harms, Ph.D.
On Thursday, August 28, 2025 at 11:22:03 AM UTC-4 mha...@wustl.edu wrote:
Also, do you actually have a b=0 acquired with "AP" polarity, for the 0223_DWI_b0_AP.nii.gz file that you created? If not, while you might be able to get the code to not error out, the result won't be valid, because it would incorrectly be treating a b=0 volume acquired with PA polarity as if it was acquired as AP polarity.
Also, as Matt mentioned, we are scoping out a proper fix for this, if you just want to wait a couple weeks…
--
Michael Harms, Ph.D.
Note though that the 'dwi_legacy_cpu' approach is not part of the HCPpipelines, and doesn't include TOPUP for susceptibility distortion correction (which is quite important).
--
Michael Harms, Ph.D.
From: Jure Demsar <demsa...@gmail.com>
Date: Friday, August 29, 2025 at 12:37 AM
To: HCP-Users <hcp-...@humanconnectome.org>
Cc: "akn...@gmail.com" <akn...@gmail.com>, "Harms, Michael" <mha...@wustl.edu>, "Glasser, Matthew" <glas...@wustl.edu>, יעל אקב <yael...@gmail.com>, Jure Demsar <jure....@ff.uni-lj.si>, "yael....@gmail.com" <yael....@gmail.com>, "demsa...@gmail.com" <demsa...@gmail.com>
Subject: Re: [hcp-users] dMRI preprocessing
Hi,
While you are waiting for fixes on our end, you could try out our legacy DWI pipeline (https://qunex.readthedocs.io/en/latest/api/gmri/dwi_legacy_gpu.html) and see if the outputs there are OK for what you need. Once you have the outputs from that, you can do some test NODDI processing (https://qunex.readthedocs.io/en/latest/api/gmri/dwi_noddi_gpu.html).
Best, Jure
Thanks so much for the help and feedback!
To answer your questions: no, the 0223_DWI_dir139_AP.nii.gz does not contain any b0 volumes, so it sounds like that would explain the error. My first intuition when reorganizing the data into paired and unpaired volumes was to drop the b0s from the NIfTI with the DWI volumes, such that everything in there was "unpaired".
I just tried running it with ALL my AP volumes in the big NIfTI (b0 and b>0; I am calling this 0223_DWI_dir147_AP.nii.gz now since I have 8 b0s along with my 139 DWIs), as well as an AP b0 and a PA b0 as in my last run. So yes, to further clarify, I do have actual b0s in both AP and PA.
For this reorganization test, for the 0223_DWI_b0_AP.nii.gz and 0223_DWI_b0_PA.nii.gz files I am using the average of all the b0s in each direction (mainly because it was easy to just copy over from another pipeline I was running). I did get further when I made this update (including b0s in the big NIfTI with the DWIs), but then it seems it failed during TOPUP (log attached).
I'm assuming that even with this having gotten further, the pipeline still requires updates, but let me know if I'm wrong and I will see if I can troubleshoot this and any subsequent issues with what I've learned now. Thanks!
Just supply all the scans that you have, without any external averaging or reorganization. Note that each scan needs to contain at least 1 b=0 volume, because that's what the code uses to normalize for intensity differences across scans.
If you do that, and the inputs meet that condition, it should run to completion.
Glad you got it working. But a single b=0 should (in principle) work fine. What was the error when you tried that configuration?
Just to clarify, I meant that a "single b=0" in each input file should be sufficient. Did you test that particular configuration?
--
Michael Harms, Ph.D.
Image Exception : #63 :: No image files match: /vwork/ark19/qunex/test_sessions/sessions/0223/hcp/0223/Diffusion/rawdata/Pos_2_b0_????.nii*
(further details in this previous post)
Note that for this the single volumes were averages of (1) the 8 AP and (2) the 3 PA b0s - I assume this wouldn't make any difference logistically, but I have not tested it using a single raw b0 volume in each of the files.
But that we determined was due to 0223_DWI_dir139_AP.nii.gz not containing any b=0 volumes at all, right? Which is a different issue from only having a *single* b=0 available in each series.
Did you ever run a test with just your two original NIFTI files, which per your earlier statement each indeed had at least one b=0:
My data were originally stored in 2 nii.gz files, one for each phase encoding direction: AP.nii.gz had 139 dwis and 8 b0s; PA.nii.gz had 3 b0s. This did not work as inputs to hcp_diffusion bc there were no "paired" files
BUT WITHOUT the use of `EMPTY` in the pos/negData input strings?
(Such an invocation should be possible within QuNex by just getting the inputs to `hcp_diffusion` set correctly.)
Provided you used --combine-data-flag=2, I believe that should have worked, with no manipulation of the original NIFTI.
Meanwhile, we are working on fixing this within QuNex.
That should have worked, per our testing so far here on our end, but if you don't want to figure out how to make that happen through QuNex on your end, that's fine.
But I wouldn't actually use what I believe you've done, which, per my understanding, is to extract and average the b=0's outside of the pipeline code itself.
Hi Annchen,
Give the very latest version of QuNex (1.3.3) a try. It should work with your data without any special modifications on your part, i.e., just import the two original files as you did previously. (I believe they were "10002_DWI_dir6_PA.nii.gz" and "10002_DWI_dir185_AP.nii.gz"?) QuNex will detect that you have no "pairs" based on the file names, automatically switch to using --combine-data-flag=2 in the call to hcp_diffusion, and construct appropriate pos/negData inputs for that situation.
If that doesn't work, please let us know.
Cheers,
-MH