Phantoms + QA + commercial solutions


Raamana, Pradeep Reddy

Aug 9, 2021, 8:34:20 AM
to niQC, Richard Mallozzi

Hi Everyone,

 

FYI, the talks from the two commercial phantom service providers, Gold Standard Phantoms and Phantom Lab, will take place on Aug 24th, 11:00 am to about 12:30 pm EDT (the latest schedule is here, with a few other exciting talks coming up soon).

One question I’d be asking in the Q&A/debate session to follow the talks: are we ready to develop or set a standard or two for routine QA for the well-understood sequences/use-cases? I feel we may have at least one candidate (fBIRN QA) that could be refreshed, modernized, and expanded to the level of a full multi-site dataset (such as ABCD). Let’s discuss this and perhaps consider putting it forth in front of the OHBM BPC. I’d love to hear your input on any other potential candidates.

Thanks,

Pradeep

Raamana, Pradeep Reddy

Aug 10, 2021, 4:26:15 PM
to niQC, Richard Mallozzi

Have you guys seen this paper on dynamic phantoms? I’d love to hear this community’s thoughts on it, as it appears to be a good step forward.

Kumar, R., Tan, L., Kriegstein, A., Lithen, A., Polimeni, J. R., Mujica-Parodi, L. R., & Strey, H. H. (2021). Ground-truth “resting-state” signal provides data-driven estimation and correction for scanner distortion of fMRI time-series dynamics. NeuroImage, 227, 117584. https://doi.org/10.1016/j.neuroimage.2020.117584

 

Abstract:

“The fMRI community has made great strides in decoupling neuronal activity from other physiologically induced T2 changes, using sensors that provide a ground-truth with respect to cardiac, respiratory, and head movement dynamics. However, blood oxygenation level-dependent (BOLD) time-series dynamics are also confounded by scanner artifacts, in complex ways that can vary not only between scanners but even, for the same scanner, between sessions. Unfortunately, the lack of an equivalent ground truth for BOLD time-series has thus far stymied the development of reliable methods for identification and removal of scanner-induced noise, a problem that we have previously shown to severely impact detection sensitivity of resting-state brain networks. To address this problem, we first designed and built a phantom capable of providing dynamic signals equivalent to that of the resting-state brain. Using the dynamic phantom, we then compared the ground-truth time-series with its measured fMRI data. Using these, we introduce data-quality metrics: Standardized Signal-to-Noise Ratio (ST-SNR) and Dynamic Fidelity that, unlike currently used measures such as temporal SNR (tSNR), can be directly compared across scanners. Dynamic phantom data acquired from four “best-case” scenarios: high-performance scanners with MR-physicist-optimized acquisition protocols, still showed scanner instability/multiplicative noise contributions of about 6–18% of the total noise. We further measured strong non-linearity in the fMRI response for all scanners, ranging between 8–19% of total voxels. To correct scanner distortion of fMRI time-series dynamics at a single-subject level, we trained a convolutional neural network (CNN) on paired sets of measured vs. ground-truth data. The CNN learned the unique features of each session’s noise, providing a customized temporal filter. Tests on dynamic phantom time-series showed a 4- to 7-fold increase in ST-SNR and about 40–70% increase in Dynamic Fidelity after denoising, with CNN denoising outperforming both the temporal bandpass filtering and denoising using Marchenko-Pastur principal component analysis. Critically, we observed that the CNN temporal denoising pushes ST-SNR to a regime where signal power is higher than that of noise (ST-SNR > 1). Denoising human data with the ground-truth-trained CNN, in turn, showed markedly increased detection sensitivity of resting-state networks. These were visible even at the level of the single subject, as required for clinical applications of fMRI.”

Aina Puce

Aug 10, 2021, 5:13:28 PM
to Raamana, Pradeep Reddy, niQC, Richard Mallozzi
Thanks for sharing, Pradeep - this looks like a cool paper. Aina

--
You received this message because you are subscribed to the Google Groups "niQC" group.


--
Aina Puce, PhD


Eleanor Cox Riggs Professor
Psychological & Brain Sciences
Programs in Neuroscience, Cognitive Science
Affiliate, Indiana University Network Science Institute
Indiana University, Bloomington IN 47405 USA
--
Chair-Past, Organization for Human Brain Mapping
Associate Editor, Perspectives on Psychological Science
--
email: aina...@indiana.edu
Twitter: @aina_puce
tel (cell): +1 812 650-2213

Raamana, Pradeep Reddy

Aug 12, 2021, 6:08:12 PM
to Aina Puce, niQC, Richard Mallozzi, liliann...@stonybrook.edu

Indeed it is – an important point from this paper (and another one below) that made me think is that there is a lack of standards even for something as simple as SNR; i.e., it appears that SNR values based on static phantoms (the basis for QA) are, strictly speaking, not comparable across sites because of scanner-induced noise and drift (even within the same session). I’d love to hear what you all think about it.

It is not clear to me whether the fBIRN community appreciated this sufficiently when making their QA recommendations back in 2012 (I will read them again to see if they touched on it).

Welvaert M, Rosseel Y (2013) On the Definition of Signal-To-Noise Ratio and Contrast-To-Noise Ratio for fMRI Data. PLOS ONE 8(11): e77089. https://doi.org/10.1371/journal.pone.0077089

 

cc-ing Dr. Lilianne Mujica-Parodi, one of the authors of the dynamic phantoms paper.

Todd Constable

Aug 14, 2021, 2:45:03 PM
to Raamana, Pradeep Reddy, Todd Constable, Aina Puce, niQC, Richard Mallozzi, liliann...@stonybrook.edu
Keep in mind there is a difference between SNR (signal in a phantom vs. the standard deviation of background thermal noise) and temporal SNR, which incorporates drift.
fBIRN did consider both of these and had separate measures (SNR and tSNR). Temporal SNR is really important in things like fMRI, but clinically it doesn’t really matter for anatomic scanning. With fMRI, the gradient shims used to heat up in long EPI runs, and there would be quite a bit of temporal drift that could really reduce the quality of the fMRI task data – this was part of the motivation for short blocks of on/off task runs…
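
Todd's distinction can be sketched in a few lines of NumPy (an illustrative toy with made-up numbers; in practice the background-noise SD would be estimated from an air ROI, not known in advance):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4D phantom series (x, y, z, time): constant true signal,
# thermal noise, plus a slow linear drift across the run.
true_signal, n_t = 100.0, 200
series = true_signal + rng.normal(0.0, 2.0, size=(8, 8, 1, n_t))
series += np.linspace(0.0, 5.0, n_t)  # scanner drift over time

# "Static" SNR: mean phantom intensity vs. SD of background thermal
# noise (here known to be 2.0 by construction).
snr = series.mean() / 2.0

# Temporal SNR: voxelwise temporal mean / temporal SD, then averaged.
# The drift inflates the temporal SD, so tSNR falls even though the
# static SNR is unchanged.
tsnr = (series.mean(axis=-1) / series.std(axis=-1)).mean()

print(f"static SNR ~ {snr:.1f}, tSNR ~ {tsnr:.1f}")
```

The gap between the two numbers is entirely due to the drift term, which only the temporal measure sees.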

R. Todd Constable, Ph.D.
Professor of Radiology and Biomedical Imaging, BME, Neurosurgery
Director MRI Research
Yale University School of Medicine
The Anlyan Center
300 Cedar Street
PO Box 208043
New Haven, CT 06520-8043
Website: http://mri.med.yale.edu

Thomas Nichols

Aug 15, 2021, 5:57:49 AM
to Todd Constable, Raamana, Pradeep Reddy, Aina Puce, niQC, Richard Mallozzi, liliann...@stonybrook.edu
Sorry for being so quiet on this thread.  I echo Todd's point, but since drift is routinely accounted for with either explicit linear modelling or filtering, there should presumably be multiple tSNR measures: raw, and those that discount drift.  FWIW, in my explorations comparing task fMRI results from AFNI/FSL/SPM on the same data, of all the methodological differences, the exact form of the drift model seems to have almost no impact on results. (Though, of course, the choice of flexibility or band-pass parameter will probably have an impact.)

LR Mujica-Parodi

Aug 15, 2021, 7:50:08 AM
to Thomas Nichols, Todd Constable, Raamana, Pradeep Reddy, Aina Puce, niQC, Richard Mallozzi, Helmut Strey

I just want to point out that the results shown in our two dynamic phantom papers are in response to the limitations of tSNR, a measure that we've actually shown to de-optimize detection sensitivity to resting-state networks.

Lilianne R. Mujica-Parodi, Ph.D.
Director, Laboratory for Computational Neurodiagnostics (LCNeuro)
Professor, Department of Biomedical Engineering
Stony Brook University School of Medicine
Health Sciences Center T8-050
Stony Brook, NY  11794-5281
Office Phone:  631-371-4413
Mobile:  631-428-8461
LAB WEBSITE:  www.lcneuro.org

Open MINDS Lab at Pitt, PI Pradeep Raamana

Aug 18, 2021, 10:31:51 AM
to niQC

Thanks, Tom - can you point us to any specific papers/preprints on this, if there are any?

Thomas Nichols

Aug 18, 2021, 10:43:25 AM
to Open MINDS Lab at Pitt, PI Pradeep Raamana, niQC
Hi Pradeep,

It's in submission; preprint below. Mentioned in the last paragraph of page 18.

-Tom

Bowring, A., Nichols, T. E., & Maumet, C. (2021). Isolating the Sources of Pipeline-Variability in Group-Level Task-fMRI results. bioRxiv. Retrieved from https://www.biorxiv.org/content/10.1101/2021.07.27.453994v1


Todd Constable

Aug 18, 2021, 12:44:23 PM
to Thomas Nichols, Open MINDS Lab at Pitt, PI Pradeep Raamana, niQC
Great! Thanks Tom!

Sent from my iPhone


Stephen Strother

Aug 21, 2021, 1:15:06 AM
to liliann...@stonybrook.edu, Todd Constable, Raamana, Pradeep Reddy, Aina Puce, niQC, Richard Mallozzi, Helmut Strey, Thomas Nichols
I am rather behind here, but I would like to correct what I believe is a misunderstanding about the fBIRN SNR parameters. There is no direct measure of tSNR as defined, at least, in the Kumar et al. paper (https://doi.org/10.1016/j.neuroimage.2020.117584), i.e., mean temporal signal divided by the temporal SD around that mean. In Friedman's 2006 papers (https://doi.org/10.1016/j.neuroimage.2006.07.012; https://doi.org/10.1002/jmri.20583), tSNR is not mentioned, and SFNR, as defined below by fBIRN, is clearly not tSNR, since its noise term is the residual over time after 2nd-order detrending.

"Region of interest (ROI) is by default a 15 × 15 square centered on the middle slice through the phantom.
Signal image is the mean intensity across time by voxel.
Temporal fluctuation noise image: first, each voxel time-series is detrended with a 2nd-order polynomial; the fluctuation noise image is the standard deviation (SD) of the residual by voxel.
Signal-fluctuation-to-noise ratio (SFNR): signal image divided by temporal fluctuation noise image, by voxel. The summary SFNR value is the average of this within the ROI."
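
For concreteness, the quoted recipe can be written out in NumPy (an illustrative sketch only – the function name and array layout are mine, not fBIRN's):

```python
import numpy as np

def sfnr_summary(vol4d):
    """Summary SFNR per the fBIRN-style recipe quoted above:
    voxelwise temporal mean / SD of residuals after 2nd-order
    polynomial detrending, averaged over a 15 x 15 ROI on the
    middle slice."""
    nx, ny, nz, nt = vol4d.shape
    t = np.arange(nt)
    # Signal image: mean intensity across time, per voxel.
    signal_img = vol4d.mean(axis=-1)
    # Detrend each voxel time-series with a 2nd-order polynomial.
    flat = vol4d.reshape(-1, nt)
    coefs = np.polynomial.polynomial.polyfit(t, flat.T, 2)
    fit = np.polynomial.polynomial.polyval(t, coefs)   # (n_vox, nt)
    # Temporal fluctuation noise image: SD of the residual, per voxel.
    noise_img = (flat - fit).std(axis=-1).reshape(nx, ny, nz)
    sfnr_img = signal_img / noise_img
    # 15 x 15 ROI centered on the middle slice.
    cx, cy, cz = nx // 2, ny // 2, nz // 2
    roi = sfnr_img[cx - 7:cx + 8, cy - 7:cy + 8, cz]
    return roi.mean()
```

On a synthetic phantom with constant signal 100 and pure white noise of SD 5, this returns a value near 20, as expected from signal/noise.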

When we recently looked at the fBIRN measures over roughly 4 years of data from 13 scanners (https://doi.org/10.1016/j.neuroimage.2021.118197), we found that SFNR and SNR tended to be very highly correlated (> 0.9), suggesting that, at least in phantoms, static voxel SNR and detrended voxel residuals are very similar. The one exception was due to subtle individual-slice instabilities, which we showed can impact rs-fMRI connectivity measures on an individual-subject basis. Furthermore, SFNR and SNR are close to orthogonal to fBIRN's other temporal SNR measure, percentFluc (see definition below). We found that percentFluc tended to be moderately to highly correlated with drift and driftfit, i.e., driven by low-frequency fluctuations.

"For the next variables: a time-series composed of the mean intensity of each volume within the ROI (i.e., the 15 × 15 square centered on the middle slice of each volume) is calculated ("raw signal"), and a 2nd-order polynomial trend is fit to this data ("fit").
msi: mean signal intensity of the raw signal.
std: SD of residuals after detrending.
percentFluc: 100 ∗ (std)/(msi)
drift: 100 ∗ (max raw signal − min raw signal)/msi
driftfit: 100 ∗ (max fit − min fit)/msi"
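
Again for concreteness, these ROI-level metrics in NumPy (an illustrative sketch of the quoted definitions; the function name is mine):

```python
import numpy as np

def fbirn_fluctuation_metrics(roi_series):
    """percentFluc, drift, and driftfit for an ROI-mean time-series,
    following the fBIRN-style definitions quoted above."""
    t = np.arange(roi_series.size)
    # 2nd-order polynomial trend fit to the raw ROI-mean signal.
    coefs = np.polynomial.polynomial.polyfit(t, roi_series, 2)
    fit = np.polynomial.polynomial.polyval(t, coefs)
    msi = roi_series.mean()             # mean signal intensity
    std = (roi_series - fit).std()      # SD of detrended residuals
    return {
        "percentFluc": 100 * std / msi,
        "drift": 100 * (roi_series.max() - roi_series.min()) / msi,
        "driftfit": 100 * (fit.max() - fit.min()) / msi,
    }
```

Note how a pure linear drift shows up almost entirely in drift/driftfit while percentFluc stays near the thermal-noise floor, which is the split Stephen describes.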

Therefore, it seems that the standard fBIRN parameters successfully split the temporal SNR structure into a static component and a very-low-frequency detrend component, with no clear evidence for other sources beyond the slice instabilities we described, which are partly addressed by some forms of censoring. I am not sure how this impacts the points made in Kumar et al., as I have yet to fully read the paper and understand the role of detrending-like preprocessing in their measurements. But for now, with the addition of measurement of slice instabilities, I am unaware of evidence that more than the fBIRN parameters are needed to characterise temporal SNR in static phantom studies – I guess supporting Gary Glover's response to Pradeep.

In human fMRI task data sets, our work on optimising preprocessing choices showed that the optimal polynomial detrending order tended to be highly variable across subjects, but was relatively low for block designs (median 0 or 1) and higher for single-event designs (median 4 or 5) (see Table S2: https://doi.org/10.1371/journal.pone.0131520). However, there are many other interacting temporal sources and preprocessing steps, so it is hard to generalise from phantom measurements.

Cheers, Stephen


-- 
----------------------------------------------------------------------------------------------------------------
To reduce e-mail overload follow the e-mail charter of Chris Anderson: http://emailcharter.org/.

Stephen Strother, PhD
Senior Scientist, Rotman Research Institute, Baycrest
Professor of Medical Biophysics, University of Toronto
E-mail: sstr...@research.baycrest.org
Tel. Office: 416-785-2500 x2956
Fax: 426-785-2862

Open Minds Lab

Sep 13, 2021, 4:42:51 PM
to niQC
Thanks, Stephen, for the detailed notes/argument - I'll bump this up for comments from the group.

I'd also like to quickly share an update: Dr. Thomas T. Liu (UCSD) has agreed to give an educational talk, "Noise contributions to fMRI signal: an overview", which makes an important point and adds a deeper, non-hardware dimension to this debate: "the line between signal and noise is not always clear", and a key cause of this challenge "stems from the fact that the brain, which is the object of study, is the generator of both the signals of interest and noise." (see graphic below)

This is an important and interesting confounding challenge to account for; as he notes, "it will be important for the field to remain mindful of this issue and to develop new analysis approaches that more effectively consider the interaction between signal and noise."

[Image attachment: Screen Shot 2021-09-13 at 4.04.50 PM.png]

Thanks,
Pradeep