Re: quantification issues


Timothy Ellmore

Oct 19, 2020, 10:04:21 AM
to Jan Petr, Henk-Jan Mutsaerts, Explo...@googlegroups.com
Hi Jan and Henk,

I think this is correct, and as you mentioned, Jan, the first two images seem to really be M0 images so before I read your last email I did another iteration where I just used:

x.M0PositionInASL4D = '[1 2]';

I didn't know about setting the variable x.M0 = 'separate_scan', but when it finishes and prints the output of the struct, I see that separate scan is indicated in the struct.

You may remember that we were wondering about the scale factor and whether it needed to be adjusted when I was erroneously using 'UseControlAsM0' and 'M0PositionInASL4D' together. So for this iteration, where I used only x.M0PositionInASL4D, I set the scale factor to the default x.M0_GMScaleFactor=1.0, and the output looks quite reasonable, with values in cortex between 40 and 60. Attached is a transverse ASLCheck for subject 1, timepoint 1, and a screenshot of what the N=40 population map looks like.

So, I think this resolves things. Thanks for all the helpful input on this!

Copying to the user group.

Tim




On Oct 18, 2020, at 2:56 PM, Jan Petr <j.p...@hzdr.de> wrote:

Tim,

sorry for all the replies - let us know if this is getting too confusing.

Henk, Tim has background suppression turned on, so you should not use 'UseControlAsM0' (we're working on calibration from control with BSup, but it is not yet ready). The first two images really seem to be M0 images acquired in the same sequence (they have WM values around 1500, while the control images with BSup have values around 500 in the top slices).

All taken together

x.M0 = 'separate_scan';
x.M0PositionInASL4D = '[1 2]';

should be your correct settings.

Do you agree Henk?

regards,
Jan

Yes this makes sense. Your first one or two volumes are probably dummy scans, as this is a PASL sequence built on top of the usual Siemens fMRI acquisition?
This is why there are parameters that you would use for fMRI but don't need for ASL, including the dummy scans: they avoid the incomplete T1 saturation issue at the start of fMRI, which you don't have with ASL, as almost all signal comes from the fresh blood (no saturation history).

So options 2 and 3 will essentially be the same, though option 3 will be less accurate (falsely including the dummy scans as if they were ASL volumes).
Option 1 uses the dummy scans as M0, which they are not.

Does that seem correct? BTW: Jan is on holiday coming week, but will still read his emails, perhaps we can give him a week break and continue afterwards ;)

On 16 Oct 2020, at 15:32, Timothy Ellmore <ellm...@gmail.com> wrote:

Hi Henk,

Yes this makes sense. Thanks for clarifying! I forgot that I should probably send this to the ExploreASL group.

I did some tests on a single subject dataset to see what happens when I choose combinations (either or both) of these variables and holding all other variables constant. 

Here's what I found:

Test 1)

%x.M0 = 'UseControlAsM0';
x.M0PositionInASL4D = '[1 2]';

The output M0.nii contains the first two volumes from ASL4D. 
There is no M0_backup.nii written to disk.

Test 2)

x.M0 = 'UseControlAsM0';
%x.M0PositionInASL4D = '[1 2]';

The output M0.nii is a single volume, which appears to be a mean volume.
There is no M0_backup.nii written to disk.

Test 3)

x.M0 = 'UseControlAsM0';
x.M0PositionInASL4D = '[1 2]';

The output M0.nii is a single volume, which appears to be a mean volume like in test 2) above.
There is a M0_backup.nii written to disk which contains the first two volumes from ASL4D.

Inspection of CBF.nii volumes output from each test above shows the most similarity between outputs from Test 2) and Test 3) while the values for Test 1) are much lower (e.g., CBF value in a lateral posterior cortical voxel is 23 in Test 1 versus 93 in Test 3).


Tim

On Oct 15, 2020, at 1:59 PM, Henk-Jan Mutsaerts <henkjanm...@gmail.com> wrote:

Yes they are mutually exclusive, although I agree with you that we can clarify this.
The pipeline follows the following order:

1) if x.M0PositionInASL4D is filled in, it will extract the volumes at these positions in ASL4D into a separate M0 NIfTI. So you use this option if the first two images need to be used as M0. If an external M0 already exists, the existing one is copied to M0_backup and M0.nii is overwritten.
2) x.M0 specifies which M0 option to use. If you have specified x.M0 = 'UseControlAsM0', it will extract the average control volume and save it as a separate NIfTI. Again, if an M0 already exists (in your case, perhaps from the extraction of the first 2 volumes), it will be saved as M0_backup and M0.nii is overwritten.

So while they may be mutually exclusive, you may trick the pipeline in this way if you have a situation where you want to discard the first two volumes and use the average control image as M0 (in the absence of background suppression).
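A minimal Python sketch of the order described above (illustrative only, not actual ExploreASL code; the function name and the file-dictionary representation are invented for this example):

```python
# Illustrative sketch of the M0 handling order described above:
# M0PositionInASL4D extraction runs first, then the x.M0 option,
# and each step backs up any existing M0 before overwriting it.

def handle_m0(files, m0_option, m0_position=None):
    """files: dict mapping filename -> description of its contents."""
    # Step 1: extract the listed volumes from ASL4D into a separate M0
    if m0_position is not None:
        if 'M0.nii' in files:
            files['M0_backup.nii'] = files.pop('M0.nii')
        files['M0.nii'] = f'volumes {m0_position} extracted from ASL4D'
    # Step 2: apply the x.M0 option, again backing up any existing M0
    if m0_option == 'UseControlAsM0':
        if 'M0.nii' in files:
            files['M0_backup.nii'] = files.pop('M0.nii')
        files['M0.nii'] = 'mean control volume'
    return files

# Using both options together reproduces Tim's Test 3 below: the extracted
# volumes end up in M0_backup.nii and the mean control becomes M0.nii.
result = handle_m0({}, 'UseControlAsM0', m0_position=[1, 2])
print(result['M0.nii'])         # mean control volume
print(result['M0_backup.nii'])  # volumes [1, 2] extracted from ASL4D
```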
Does this make sense?

Cheers

@Jan: shall we copy this to the ExploreASL groups?

On 15 Oct 2020, at 17:38, Timothy Ellmore <ellm...@gmail.com> wrote:

Hi Jan,

I’ve been working through the questions you raised and after looking around I am still confused about what is actually happening based on how I configured my processing regarding 3) below:

In my DataPar.m file I have listed, in this order:

x.M0 = 'UseControlAsM0';
x.M0PositionInASL4D = '[1 2]';

Do I need or should I list both? Based on the comments below, the options seem mutually exclusive with x.M0 = 'UseControlAsM0'; implying all controls in the pairs will be averaged to make a mean control image as M0.nii while x.M0PositionInASL4D = '[1 2]'; implying that only the first pair will be used. 

After processing, when I look inside the ASL_1 directory I see an M0.nii and also an M0_backup.nii. The M0_backup.nii appears to be the first two volumes extracted (the first control-label pair), while the M0.nii looks like a mean image with values much lower than in M0_backup.nii. So I guess my question is: with this configuration, is it using a mean of all pairs (x.M0 = 'UseControlAsM0';) instead of just the first pair (x.M0PositionInASL4D = '[1 2]';), where no background suppression was used?


% x.M0 - choose which M0 option to use (REQUIRED)
%      - options:
%        - 'separate_scan' = for a separate M0 NIfTI (needs to be in the same folder called M0.nii)
%        - 3.7394*10^6 = single M0 value to use 
%        - 'UseControlAsM0' = will copy the mean control image as M0.nii and process
%                             as if it was a separately acquired M0 image (taking TR etc from the
%                             ASL4D.nii). Make sure that no background suppression was used, otherwise
%                             this option is invalid


% x.M0PositionInASL4D - indicates the position of M0 in TimeSeries, if it is integrated by the vendor in the 
%                       DICOM export. Will move this from ASL4D.nii to M0.nii (OPTIONAL, DEFAULT = no M0 in timeseries)
%                     - example for Philips 3D GRASE = '[1 2]' % (first control-label pair)
%                     - example for Siemens 3D GRASE = 1 % first image
%                     - example for GE 3D spiral = 2 % where first image is PWI & last = M0

Thanks for any insight!

Tim
 

On Sep 23, 2020, at 12:54 PM, Jan Petr <j.p...@hzdr.de> wrote:

Dear Tim,

Thank you for all the information - with all that, I could make a complete picture of what you have. I'll list my observations + actions to be taken point by point.

Henk, can you quickly check and comment, and look for 'Henk' to answer some specific questions, please?


1) The slice readout time. This is calculated as (minTR - postLabelingDelay - labeling duration)/numberOfSlices
All those were in the exam card so I calculated:
(3953-1650-1600)/16 = 43.94 ms
Simply use this number in the Data_par.m settings for the whole study (I did that in the file attached). This could change some of the inferior-superior CBF gradient we discussed you saw.

2) It is indeed a 2D_EPI PCASL, but the PLD and labeling duration was set incorrectly in the Data_par.m - I have fixed that for you.

3) Background suppression is set to 'auto', so I can't see the timings. But that's not a problem since the data has an M0 scan. The first 2 dynamics out of 60 are M0 images without background suppression. And the remaining 58 dynamics are control/label imaging *with* background suppression.
In the Data_par.json you can set the Backgroundsuppression to 2, which is the default for Philips (you had that one correctly).

You can use the first two control images (as they don't have background suppression) for the M0-scan calibration, but not the rest of them.
This seems to be correctly configured, but I would need Henk to confirm this (not familiar with that option). Henk - are the first two images used for M0 and only the rest for ASL?
x.M0 = 'UseControlAsM0';
x.M0PositionInASL4D = '[1 2]';

I wanted to check whether the M0 has a different TR or not, but all DICOMs have a TR of 4550 ms, which differs from what is in the exam card (4000 ms) - any idea, Tim?

4) There are two technical details. The sequence was acquired with Philips software version 5.3 with strong SPIR fat suppression. This version of the scanner software had an incorrectly set frequency shift of the fat suppression, resulting in an artifact in the middle of the brain. This artifact is too minor to be visible on the raw ASL images, but after subtraction it can be rather annoying. We are working on getting rid of it, but it is a pain in the neck. Henk - can you confirm that this is the wrong scanner software version? The good thing is that the standard deviation of the signal over time looks perfectly normal, so you hopefully don't have this issue.

5) Another detail is that the acquisition resolution was "2.73 / 2.73 / 6.00", but reconstructed to "1.88 / 1.88 / 6.00".
This is not a problem at all, but the acquisition resolution should be taken into account when evaluating GM perfusion.
Henk, we have defined this newly in BIDS, but do we have a parameter to specify this in the Data_Par.json so that this would be taken into account? Or is this something we still need to program?

6) I have checked the JSON and NIfTI files and in principle it all looks good; the Philips scalings are all read. What's troubling, though, is that the contrast in the first M0s is around 3 times higher than in the rest of the images. Even when reading the data slice by slice, and ASL and M0 separately, it doesn't seem that the M0 and ASL images were saved with a different scaling. I haven't really seen this in Philips yet, but it indeed looks as if there was a scaling of 3 on the initial M0 compared to ASL. You have mentioned that after using a factor of 10, you got CBF of around 20. So a factor of 3 (which is plausible from the data) would give CBF of around 60, which is very reasonable. But we of course have to find out if that's a general value to be used. Our previous discussion with Kim about version 5.3.1.3 led us to the conclusion that this can behave a bit unpredictably, but we couldn't figure out what's wrong.

Anyway - I have run the processing and came to a similar conclusion: CBF values ~200, 3 times higher than expected, which corresponds well with the M0-control difference.

I will try to contact the Philips people again and discuss with them. And we can try to run our new BSup correction to see if the M0-control difference is exactly 3 or not.

Tim, can you try to ask your MR technologist what the background suppression pulse timing is - maybe if he clicks on the 'auto', it will display the correct timing values. Though we can probably live without that as well.
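The slice readout time calculation from point 1 can be written out explicitly; this is just the arithmetic quoted above (minTR = 3953 ms, PLD = 1650 ms, labeling duration = 1600 ms, 16 slices), not ExploreASL code:

```python
# Slice readout time = (minTR - PostLabelingDelay - labeling duration) / numberOfSlices
def slice_readout_time(min_tr_ms, pld_ms, label_dur_ms, n_slices):
    """Returns the per-slice readout time in ms."""
    return (min_tr_ms - pld_ms - label_dur_ms) / n_slices

# Values from the exam card discussed above:
print(round(slice_readout_time(3953, 1650, 1600, 16), 2))  # 43.94
```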

Let's try to work on the issues above and talk again next week.

best,
Jan

Thank you. Nice to meet you all. Sorry for my internet connection issues. I got bumped out twice today because they were doing work in my neighborhood on the network. Hopefully things will be back to normal next week.

I will package up the two timepoints from a single subject and send it to you to have a look.

Best wishes,

Tim


On Sep 18, 2020, at 11:43 AM, Jan Petr <j.p...@hzdr.de> wrote:

Dear Tim,

It was a pleasure to talk to you.

I see that all the Philips scalings are in the JSON - that is good because that means that they were not anonymized in the import. It seems that you have not used the ExploreASL import, but rather a dcm2nii from 2018 - this should in theory be fine and your DataParameters.m looks mostly good as well.

Possible issues are:

1) Regarding the SliceReadoutTime - I'll try to calculate it next week, but we might need to get in contact with your MR technician to get some more information - I'll let you know what is needed.

2) Henk - did we see BSup or not in the data? Because BSup is set to two pulses, but controls are used as M0. Was there a separate M0 scan acquired? We can provide solutions for all possibilities - separate M0 / BSup with control as M0 / control as M0 without BSup - but we need to know how it was acquired. Tim - you said that you have the exam card; it will be written there, including the BSup timings.

3) If the issues above don't help with the scaling, we might want to have a look at the data. Tim, how difficult would it be to send data for a single patient - either NIfTI+JSON or, even better, the DICOMs? We would use it only to check the quantification and then delete it. We of course understand that this is not possible at all for some studies; I am just asking because it could speed things up.

best,
Jan




<DataParTemplate_JP.m>






Henk-Jan Mutsaerts

Oct 19, 2020, 2:44:31 PM
to Timothy Ellmore, Jan Petr, Explo...@googlegroups.com
Great, yes x.M0_GMScaleFactor=1.0 effectively disables any additional M0_GM scaling.
This parameter is just for any custom scaling, which in your case isn't necessary.

BW, Henk

On 19 Oct 2020, at 16:04, Timothy Ellmore <ellm...@gmail.com> wrote:

<Tra_qCBF_01_1_ASL_1.jpg>


<Screen Shot 2020-10-19 at 10.00.36 AM.png>

Jan Petr

Oct 19, 2020, 2:58:48 PM
to Timothy Ellmore, Henk-Jan Mutsaerts, Explo...@googlegroups.com
Hi Tim,

x.M0 = 'separate_scan' is probably the default value if you omit it. So this works, but it's better to put it there explicitly.

Yes, I do remember - sorry I didn't get back to you on that. So I am super happy this seems to have solved that issue. And it is rather logical, because with Philips we have indeed never needed to set the M0_GMScaleFactor if the DICOMs were OK.

Just to be sure - I have tried it again with the one dataset from you and you are right, the values are good. And the 40-image average looks great as well.

So I guess you are fine to continue with the analysis now - let us know if there are more issues, or if you need help interpreting. You might also want to check the individual cases for artifacts; the example from you was slightly high on vascular artifacts.

regards,
Jan

Timothy Ellmore

Oct 20, 2020, 9:16:34 AM
to ExploreASL
Okay, thanks. At the individual subject level, do you have any advice about systematically checking the extent of vascular artifacts? Rather than subjectively looking at the images, to quantify it, should I consider the number of zeros in each subject's MaskVascular.nii image as the extent of vascular artifacts - so subjects with more zeros in that image would have more artifact? And then at the population level, should I be looking at MaskVascular_*sd.nii.gz to get a sense of variability?

Tim

Jan Petr

Oct 28, 2020, 5:46:16 AM
to ExploreASL
Looking at the MaskVascular is not a bad idea, but it would not really serve the purpose, because it identifies only the worst outliers and is not the best way to assess the systematic presence of higher vascular signal.

The best tool for that is the spatial CoV parameter - see the publication here: https://journals.sagepub.com/doi/full/10.1177/0271678X16683690
It divides the spatial SD by the spatial mean over the whole brain. It is low for a rather homogeneous image that displays mostly perfusion, and it gets high when there are regions with very low perfusion, many spots with high signal, or a combination of both. This typically occurs when there are problems with vascular artifacts and high ATT, either in the whole brain or in regions. In the article, there are some thresholds to define good, mediocre, and bad spatial CoV (see also here: https://www.sciencedirect.com/science/article/pii/S1053811920305176?via%3Dihub). But these might not be universally valid for all studies - there can be some scanner/sequence/pathology variation. On the other hand, it should be rather consistent within a single study.

The tables with sCoV should be calculated and also placed automatically in the Stats folder. 
Population/Stats/CoV_qCBF_StandardSpace_TotalGM_*_PVC0.tsv
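The SD/mean definition above can be sketched in a few lines of Python (with made-up voxel values for illustration; ExploreASL computes this per region from the qCBF maps):

```python
# Spatial CoV = spatial SD / spatial mean of the CBF values within a mask.
from statistics import mean, pstdev

def spatial_cov(cbf_values):
    """SD/mean over the masked voxels; always positive."""
    return pstdev(cbf_values) / mean(cbf_values)

homogeneous = [55, 60, 58, 62, 57, 61]   # mostly uniform perfusion -> low sCoV
patchy = [10, 120, 15, 140, 12, 130]     # low-CBF regions plus hot spots -> high sCoV
print(spatial_cov(homogeneous) < spatial_cov(patchy))  # True
```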

regards,
Jan

Timothy Ellmore

Oct 29, 2020, 9:26:36 AM
to Jan Petr, ExploreASL
Helpful background. Thank you, Jan!

In my output I’m looking in CoV_qCBF_StandardSpace_TotalGM_n=40_17-Oct-2020_PVC0.tsv but I don’t see a column for sCoV. 

I see only the following columns:

SUBJECT LongitudinalTimePoint SubjectNList Site AcquisitionTime GM_vol WM_vol CSF_vol GM_ICVRatio GMWM_ICVRatio WMH_vol WMH_count MeanMotion TotalGM_B TotalGM_L TotalGM_R

In the Stats directory I do see a nice graph where sCoV is plotted, but it would be nice to have access to the numbers in a .tsv so I can figure out which points belong to which subjects. Looks like I have a couple of outliers. 



Jan Petr

Oct 29, 2020, 2:16:22 PM
to ExploreASL
That's the sCoV:
TotalGM_B
TotalGM_L
TotalGM_R

The CoV_qCBF file contains only sCoV values (calculated from the qCBF file) - apart from the volumes, patient names, etc., all the columns with region names contain sCoV instead of CBF.
You should see that the sCoV values are mostly between 0.2 and 1.5. Note that they are SD/mean, so always positive.

The nice graph: that's not the spatial CoV itself anyway, but the relative difference between the first and second halves of the scan...
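To find which points belong to which subjects, as asked above, the TSV can be read with any standard tool; here is a stdlib-Python sketch (the column name matches the TSV above, but the threshold and helper name are illustrative):

```python
# Read an sCoV TSV and flag subjects whose TotalGM_B exceeds a threshold.
import csv
import io

def flag_high_scov(tsv_text, column='TotalGM_B', threshold=0.9):
    """Return (subject, sCoV) pairs above the threshold."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter='\t')
    return [(row['SUBJECT'], float(row[column]))
            for row in reader if float(row[column]) > threshold]

# Made-up two-subject example with the same column layout:
example = 'SUBJECT\tTotalGM_B\nSub01\t0.55\nSub02\t0.95\n'
print(flag_high_scov(example))  # [('Sub02', 0.95)]
```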

regards,
Jan

Timothy Ellmore

Oct 29, 2020, 4:36:56 PM
to Jan Petr, ExploreASL
Okay, I understand now. 

My mean TotalGM_B across subjects and the two timepoints is 0.633 with a min of 0.432 and max of 0.918 (std=0.131) so I guess according to the JCBFM paper that is definitely on the higher side. 

Thanks,

Tim

Jan Petr

Oct 30, 2020, 3:19:39 AM
to ExploreASL
These numbers look good. I can't comment exactly on the numbers off the top of my head, but below 0.6 you typically have very nice scans without artifacts. Scans at 0.6-0.9 have plenty of them, but are not necessarily too bad. It's values above 1 (1.5 or even 3) that are the bad, angiography-like scans.

So you indeed have a rather typical distribution for older subjects - a bit on the higher side, but rather common for your age group. Note also that we haven't yet studied much what the effect of readout, resolution, and PLD is on the sCoV values.
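Jan's rough cutoffs can be expressed as a small helper (the labels and exact boundary handling are a paraphrase of the message above, not an official classification, and as noted the cutoffs are study-dependent):

```python
# Rough sCoV rating per the heuristic above: below ~0.6 typically clean,
# 0.6-0.9 some vascular artifacts but often acceptable, above that likely bad.
def rate_scov(scov):
    if scov < 0.6:
        return 'clean'
    if scov <= 0.9:
        return 'some artifacts'
    return 'likely bad'

# Tim's min, mean-ish, and a hypothetical outlier value:
print([rate_scov(v) for v in (0.43, 0.63, 1.5)])
# ['clean', 'some artifacts', 'likely bad']
```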
