Hi Everyone,
Here are some designs for the “Just QC It” t-shirts that I was able to produce with some artists online – let me know if they look appropriate. If you want to print them, I can share the source files. We’d like neuroscientists everywhere to wear them to promote QC in their world.
We’ll have a few more coming saying:
Quality Matters
I’m a Quality Champion
Please feel free to share your ideas for slogans or merch designs.
Ideally, we will set up something online where people can order the merch with a few clicks, choosing colors, sizes, etc. (let me know if you want to help with that). We may distribute them at the next OHBM and INCF meetings.
Thanks,
Pradeep
Assistant Professor,
Department of Radiology,
University of Pittsburgh School of Medicine.
Lab: openmindslab.com
Blog: crossinvalidation.com
Some here might wonder why we need this; here are two short stories/examples to help justify the need for promoting QC and QA more broadly:
Thanks,
Pradeep
Hi all,
I actually enjoy pushback like that from PIs or others, as it gives me a chance to think carefully about the big picture and hone the most convincing reasons why we need this. It’s important to have solid, easily demonstrated reasons why we would want to do this, as there are many who are skeptical – especially since it involves extra work for everyone. I would have started by simply saying that there are many sources of differences between scanners, and of instabilities, and that these influence results. Having these at least measured and reported in an objective way will allow us to get a handle on their relative significance and work on ways to mitigate them, thus helping push the field forward as it increasingly depends on shared data.
Peter
Also, talking about being mindful of the extra work involved in QC reminds me of the similar argument and resistance people have made to sharing one’s data and/or code. I guess we have some miles to hike before awareness of the need for QC/QA is on the same plane as sharing data/code, which is only just getting normalized.
As I see it, “QC being a lot more work” is a manufactured problem, resulting from data not being as openly shared as possible. As I noted before, even with public datasets, given the Do Not Redistribute clause in the Data Usage Agreements*, whatever little QC some labs/institutes do, they have to keep to themselves. If we remove that barrier and allow crowd-sourcing on fully open datasets, any QC needed for a given dataset+analysis combination needs to be done only once. Even if we had to redo QC with a different criterion, we wouldn’t have to start from scratch.
Even to show that QC doesn’t matter, or is not worth the effort, requires that we do it and compare the results with and without QC. From my very biased point of view, all roads lead to the need for acceptable QC! 😊
*I did reach out to some folks at NIH requesting that they reconsider this, but it looks like they need more famous scientists with a lot of $$$$ on their CVs to talk to them. I will pick it up again when I am able to.
'That QC might not matter when doing certain stats (like group differences on large samples)'
I think that’s exactly when it can matter – let’s take Yarik’s paper showing that part of the measurement is explained by SNR. Now imagine your large single-site population with some inhomogeneous sampling over time (which, by the way, is often the case – when you are in a dry patient season, you scan the controls); then any changes in the scanner will create uncontrolled differences (they might average out, they might not, we simply do not know).
This also lets me piggyback on the project of getting QC across multiple centres and applying it to running studies ... with the goal being exactly that kind of situation, i.e. not QC for the scanner, but QC to regress stuff out at the group level.
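For concreteness, a rough sketch of that kind of group-level adjustment (not anyone’s actual pipeline; the numbers, names, and SNR-drift model below are entirely made up): simulate a slow scanner drift that is sampled unevenly across two groups, then add a per-scan QC metric as a covariate.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
# controls scanned mostly late in the year, patients mostly early (inhomogeneous sampling)
group = np.r_[np.zeros(n // 2), np.ones(n // 2)]  # 0 = control, 1 = patient
scan_week = np.where(group == 0, rng.uniform(26, 52, n), rng.uniform(0, 30, n))

snr = 100 - 0.4 * scan_week + rng.normal(0, 3, n)   # slow scanner drift visible in a QC metric
true_effect = 0.0                                   # no real group difference
volume = 50 + true_effect * group + 0.05 * snr + rng.normal(0, 1, n)

# naive model (group only): the drift masquerades as a group effect
naive = sm.OLS(volume, sm.add_constant(group)).fit()
# adjusted model: the QC metric as a covariate soaks up the drift
adjusted = sm.OLS(volume, sm.add_constant(np.column_stack([group, snr]))).fit()

print("group beta, naive:    %.3f (p = %.3g)" % (naive.params[1], naive.pvalues[1]))
print("group beta, adjusted: %.3f (p = %.3g)" % (adjusted.params[1], adjusted.pvalues[1]))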
Cyril
--
Dr Cyril Pernet, PhD, OHBM fellow, SSI fellow
Neurobiology Research Unit, Building 8057, Blegdamsvej 9
Copenhagen University Hospital, Rigshospitalet
DK-2100 Copenhagen, Denmark
wamc...@gmail.com
https://cpernet.github.io/
https://orcid.org/0000-0003-4010-4632
Great points, Cyril! You don’t need to convince me that QC/QA are important in every analysis – we need to spread the message to the unbelievers 😊. I was referring to a rather basic scenario, with an unstated qualifier of “all else being okay”, i.e. no site differences, etc.
PS: In fact, I’ve been saying for some years now that there is not much point in running mere [classic] group-difference analyses at all without further analyses/stats to evaluate the potential for the actual goal (e.g. biomarker predictive utility with out-of-sample evaluation).
Hello
I guess everyone on this list is convinced that QC matters, and I am for sure, but let me try to explain the view of others who are not. I can see the point that a large sample is the simplest way to still get results (even without QC). You just need a sample large enough to see an effect.
If QC is controlled (i.e. the level of noise is known), one can then predict the minimal sample size needed to see a difference. But today we do not perfectly control the noise level, so we just try and test until we have a big enough sample size.
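To make this concrete, a minimal sketch of such a calculation (the numbers are made up; the only point is that the minimal n follows directly once the noise level, and hence the standardized effect size, is known):

from statsmodels.stats.power import TTestIndPower

expected_difference = 2.0   # e.g. mL of grey-matter volume between groups (made up)
noise_sd = 8.0              # residual SD -- the part a QC/QA programme would pin down
effect_size = expected_difference / noise_sd   # Cohen's d

n_per_group = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05,
                                          power=0.8, alternative='two-sided')
print("minimal n per group: %.0f" % n_per_group)   # larger noise_sd -> larger n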
The only thing one needs to check is to avoid any QC bias between groups. Let’s say you do not work with a group that is more prone to motion; then you just have scanner-related noise to take care of, and it is very likely that this scanner noise will not correlate with your group. (Just avoid scanning one group on a specific scanner or with a specific sequence!)
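That between-group check can be scripted in a few lines, for example (the motion values here are fabricated; in practice they would be per-subject summaries such as mean framewise displacement):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fd_controls = rng.gamma(shape=2.0, scale=0.08, size=60)   # mm, fabricated values
fd_patients = rng.gamma(shape=2.0, scale=0.12, size=60)   # patients move a bit more

t, p = stats.ttest_ind(fd_controls, fd_patients, equal_var=False)
print("Welch t = %.2f, p = %.3g" % (t, p))
# a significant difference means motion-related noise is confounded with group,
# and the "large sample" argument alone no longer protects the group comparison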
So in this scenario (which is the majority of studies), QC is only important for reducing the sample size, and so with large samples it does not matter ...
Now let’s see what happens if you want to study a pathology where patients are more prone to motion. There is a problem, since we know motion will bias the quantitative volumetric information we get from an anatomical scan. But can we do a QC that will allow us to regress out this bias? I am not convinced ...
The gold standard for QC today is still visual inspection of the data. So you have to rely on an expert who looks at the data and makes a binary decision: this is good or not! Approaches like MRIQC or the Biobank pipeline give you a QC score, but this comes from a classifier that learns the expert’s binary choice, so I do not see how this can become quantitative.
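To spell that out, the typical recipe looks roughly like the sketch below (file and column names are invented); the output is a probability that the expert would say “pass”, not a physical severity measure:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

iqms = pd.read_csv("iqms.csv")              # one row per scan: snr, cnr, efc, ... (hypothetical file)
labels = pd.read_csv("expert_labels.csv")   # binary pass/fail from visual inspection (hypothetical file)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, iqms.values, labels["pass"].values, cv=5)
print("cross-validated accuracy:", scores.mean())
# the model reproduces the expert's binary call; it does not measure artefact severity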
Especially with motion, where the level of artefact is a continuous variable, we just do not know (today) how to quantify it or how it relates to the specific end results (we just have evidence that there is an effect ...).
There may be improvements, and that is why we need to work on QC. (I have tried to work on simulated motion artefacts (on anatomical scans), in order to learn a quantification of motion artefact severity, but I have not been successful yet ...) I also like the SPM CAT12 approach, which gives a quantitative QC metric (related to the contrast-to-noise ratio), but more work is needed to understand how it relates to the end results you want to compare. Even though those methods are not perfect, they are a good way to go, to check whether there is any difference in QC between groups.
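For what it’s worth, the simulation side of that idea can be sketched in a few lines – corrupt a clean slice in k-space by a known amount so that the severity becomes a ground-truth regression target. This is only a toy 2D illustration, not a working method:

import numpy as np

def simulate_motion(img2d, severity, rng):
    """Return a motion-corrupted copy of img2d; severity = shift in pixels."""
    k = np.fft.fftshift(np.fft.fft2(img2d))
    ny, nx = img2d.shape
    corrupted_lines = rng.choice(ny, size=ny // 4, replace=False)  # 25% of phase-encode lines
    # a translation by `severity` pixels is a linear phase ramp in k-space
    ramp = np.exp(-2j * np.pi * severity * (np.arange(nx) - nx // 2) / nx)
    k[corrupted_lines, :] *= ramp
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

rng = np.random.default_rng(2)
clean = rng.random((128, 128))              # stand-in for a clean anatomical slice
severities = rng.uniform(0, 4, size=50)     # known ground-truth severity per sample
dataset = [(simulate_motion(clean, s, rng), s) for s in severities]
# a regression model trained on `dataset` would predict a severity, not a pass/fail label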
Last point: whenever single-subject analyses are needed, I am afraid MR is still not a quantitative measurement. Even for simple volumetric information measured from an anatomical MRI, I am not aware of any software that can give me a volume AND the error (or the confidence interval), i.e. V = x ± e. It seems obvious that this error depends on noise, contrast, motion, artefacts, etc., but can we predict it? There is very interesting work on this topic with deep-learning strategies, where the model tries to learn both a prediction and the uncertainty of that prediction. This is of great importance in all applications, and we need one for MRI!
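A bare-bones sketch of that prediction-plus-uncertainty idea (a placeholder network with made-up inputs, nothing MRI-specific, just the shape of the approach):

import torch
import torch.nn as nn

class VolumeWithError(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.mean_head = nn.Linear(64, 1)     # predicted volume
        self.logvar_head = nn.Linear(64, 1)   # predicted log-variance (the error bar)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    # negative log-likelihood of target under N(mean, exp(logvar)), up to a constant
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

model = VolumeWithError(n_features=10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randn(32, 1)   # placeholder batch, not real MRI features
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
optimizer.step()
# report V = mean +/- exp(0.5 * logvar), i.e. a volume with its error estimate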
So, to summarize my point here: I do think QC matters, but we need to improve how we do QC ...
Romain