fMRI, ALFF, ReHo Maps validation

Óscar Peña-Nogales

Mar 31, 2021, 7:04:50 AM
to cpax_forum
Dear all,

I'm something of a newbie with fMRI data and its preprocessing pipelines, and I'm using CPAC for the first time. After some effort, I was able to run a whole CPAC pipeline on a test subject, but now I need to validate that the outputs make sense.

That is, I want to confirm that all the preprocessing steps actually lead to a better image and that the computed maps (ALFF and ReHo) also make sense (they are within realistic values, and so on).

The problem is that I have no idea how to do this, so I would really appreciate it if anyone could give me a hint.

Thank you very much.

sgia...@gmail.com

Mar 31, 2021, 6:01:31 PM
to cpax_forum
Hi Óscar,

Welcome to the community and to CPAC!

This is a solid, multi-faceted question that the field discusses and debates on an ongoing basis. In fact, part of CPAC's design - its configurability in particular - is intended to let researchers run many preprocessing strategies and evaluate the impact those decisions have on the resulting data. Although some preprocessing steps have reached consensus regarding their positive effect, many other choices remain under continual debate and study.

Some further questions to keep in mind as we try to clarify what you're looking to do:
  • How would you define a "better" image?
  • What is your eventual goal for the processed data? What type of post-processing, statistical analysis, group-level analysis, etc. do you hope to perform, if any?
  • What level of validation would you be comfortable with regarding how realistic the values of the computed maps are? Benchmarking them against the same outputs from other packages? Or are you hoping for the more ambitious aim of a comparison against some type of ground truth?
But these are bigger questions that may not have an immediately satisfying answer. To help you get started, here are some first steps you could try:
  • Running similar preprocessing/analysis, either by hand with AFNI/FSL tools or using another fMRI pipeline package, and calculating correlations between those outputs and CPAC's outputs (we regularly run analyses like these; see the correlation sketch after this list). However, this compares one pipeline to another and is no guarantee of ground truth.
  • Doing the same as above, but with already-generated outputs provided by a lab, packaged together with a publication.
  • Visually inspecting key results (see the overlay sketch after this list): for example, does the BOLD/functional data registered to template space cleanly overlap with the template you used? Do the T1 and BOLD brain masks overlay cleanly on the original data? Registration quality, in our experience, tends to be one of the biggest drivers of variability in the output data.
  • Inspecting the values in the z-score-standardized versions of the outputs (see the z-score check after this list). z-scoring expresses individual scores in terms of where they sit within the overall distribution - in this case, for example, the z-scored ALFF output will provide a value for each voxel, and this value is the number of standard deviations that voxel's original/raw ALFF value lies from the mean. So if you are seeing extreme values in the z-scored outputs (say, outside the range [-3, 3]), that could be a sign that you're getting some volatile results - but not a guarantee.
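
To make the correlation idea concrete, here is a minimal sketch of that kind of comparison, assuming both ALFF maps are already in the same space and resolution (the file names are placeholders for your own outputs):

  # Sketch: voxelwise Pearson correlation between two pipelines' ALFF maps.
  import nibabel as nib
  import numpy as np

  cpac = nib.load("cpac_alff.nii.gz").get_fdata()    # hypothetical CPAC output
  other = nib.load("other_alff.nii.gz").get_fdata()  # hypothetical AFNI/FSL output

  # Compare only voxels that are nonzero in both maps (a crude brain mask).
  mask = (cpac != 0) & (other != 0)
  r = np.corrcoef(cpac[mask], other[mask])[0, 1]
  print(f"Voxelwise Pearson r over {mask.sum()} voxels: {r:.3f}")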
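
Similarly, a minimal sketch of the visual overlap checks using nilearn (any viewer such as fsleyes or the AFNI GUI works just as well; the paths are placeholders):

  # Sketch: eyeball registration quality by overlaying outputs.
  from nilearn import plotting

  # Mean BOLD registered to template space, drawn as an anatomical image;
  # it should line up cleanly with the template you registered to.
  plotting.plot_anat("mean_bold_in_template.nii.gz",
                     title="Mean BOLD in template space")

  # Brain mask drawn over the native T1; it should hug the brain edge.
  plotting.plot_roi("t1_brain_mask.nii.gz", bg_img="t1.nii.gz",
                    title="T1 brain mask over native T1")
  plotting.show()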
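
And for the z-score inspection, a rough sketch of the kind of check I mean, where "alff_zstd.nii.gz" stands in for the z-score-standardized ALFF output:

  # Sketch: flag voxels whose z-scored ALFF falls outside [-3, 3].
  import nibabel as nib
  import numpy as np

  z = nib.load("alff_zstd.nii.gz").get_fdata()
  inside = z != 0                      # crude in-brain selection
  extreme = inside & (np.abs(z) > 3)
  frac = extreme.sum() / inside.sum()
  print(f"{100 * frac:.2f}% of in-brain voxels have |z| > 3")
  # Under a roughly normal distribution you'd expect ~0.3% beyond 3 SD;
  # a much larger fraction is a hint (not a proof) that something is off.
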
Please let me know if this helps - I'd be glad to help you along as you try things out.

Best of luck,

Steve

Óscar Peña-Nogales

Apr 1, 2021, 2:21:23 AM
to cpax_forum
Hi Steve,

Thank you very much for your timely response. To give you some context, I come from the DWI world, where the different post-processing steps are mainly assessed visually and by checking that the diffusion maps are within reasonable values.
So let me now answer each of your bullet points individually.
1)
  • For example, a "better" tractography would be one with fewer artifacts and a closer resemblance to the subject's brain. In fMRI, my guess would be that a better image is not distorted, is properly segmented, and is appropriately denoised. The last one, I've heard, is probably the most controversial, since there has to be a tradeoff between what is/is not noise and how much of it we should 'remove'.
  • Currently, my main goal is the correct and accurate computation of f/ALFF, ReHo, and SCA.
  • My thought was to compare the outputs of my CPAC pipeline to the outputs of other packages, but this would just be a 'silver' validation. My ultimate goal would be a 'gold' validation with a direct comparison against a ground truth, but as far as I know that is not possible.
2)
  • This was my first idea, although not all software packages carry out the same steps, so the correlations might not be that high. Also, what would be a reasonable threshold for a high correlation?
  • In this case, I would need their pipeline to be exactly the same as mine (plus all their data to be available).
  • Clearly the registration/segmentation steps are easy to check, but how would you confirm that you applied the correct denoising, smoothing...?
  • Thanks for suggesting this one - I hadn't thought of z-scores in this way. As for the maps, I know the range of ALFF depends on the data, and that f/ALFF and ReHo should be between 0 and 1, right? A quick range check like the sketch below is what I have in mind.
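
For instance, this is the kind of quick check I mean (the file names are just placeholders):

  # Sketch: sanity-check the value ranges of the computed maps.
  import nibabel as nib

  for name in ("falff.nii.gz", "reho.nii.gz"):  # hypothetical output names
      vals = nib.load(name).get_fdata()
      vals = vals[vals != 0]
      print(f"{name}: min={vals.min():.3f}, max={vals.max():.3f}")
      # fALFF (a power ratio) and ReHo (Kendall's W) should land in [0, 1].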

Also, in case it helps in any way, this is the fMRI pipeline I had thought of (it's interconnected with the anatomical preprocessing steps at several points, but just to give you an idea): remove the first 10 volumes, despike, slice-timing correction, motion correction, denoising (I still don't know what to correct for and how), intensity normalization, temporal filtering, map computation, z-score normalization, smoothing, and a final transformation to atlas space.

As I mentioned in my first e-mail, I'm a newbie in the fMRI world, so please don't hesitate to correct me if I'm wrong about anything.

Thank you very much,
Óscar

sgia...@gmail.com

Apr 1, 2021, 5:05:58 PM
to cpax_forum
Hi Óscar,

Ah, I see - yes, I have several friends/colleagues in the DWI world, and I've seen the difference in approach to assessing data. I can see your questions in better context now.
I hope others will chime in, as this is a deeply faceted and worthwhile conversation to have.

Best,
Steve