Thresholds for New "Bad Slices" Metric


Patrick Sadil

to DSI Studio
Dear DSI Studio Developers,

One aspect of the paper "Differential tractography as a track-based biomarker for neuronal injury" (Yeh et al. 2019) that I found most useful is the provision of thresholds for quality-control metrics. For example, in that paper, scans were rejected if more than 0.1% of the slices were flagged. Of course, that can only be a guideline, but it helps for interpreting these metrics when seeing them for the first time.

Since then, it seems the implementation of the bad slices metric has shifted from a correlation-based approach to this: https://github.com/frankyeh/DSI-Studio/blob/3d5fd1565c21e8c0f78e814c0d9fcfc570520bcf/libs/dsi/image_model.cpp#L525-L554 (please let me know if I am mistaken). Are there updated guidelines for what constitutes an acceptable dataset?

Thanks,

Frank Yeh

to psa...@gmail.com, DSI Studio
I would recommend using Neighboring DWI Correlation (NDC) to identify potential outliers. Specifically, you can calculate the NDC for each scan in your dataset and then apply an outlier detector to those values to determine which datasets may be problematic.
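A minimal sketch of this idea, assuming the per-scan NDC values have already been exported from DSI Studio (the function name and the MAD-based detector are illustrative choices, not part of DSI Studio itself):

```python
import numpy as np

def flag_ndc_outliers(ndc, n_mads=3.0):
    """Flag scans whose NDC falls unusually far below the group median.

    ndc: 1-D sequence of Neighboring DWI Correlation values, one per scan.
    n_mads: how many median absolute deviations below the median to flag.
    """
    ndc = np.asarray(ndc, dtype=float)
    med = np.median(ndc)
    mad = np.median(np.abs(ndc - med))
    if mad == 0:
        # All scans essentially identical; nothing to flag.
        return np.zeros_like(ndc, dtype=bool)
    # Low NDC means poor agreement between neighboring DWI volumes,
    # so only the low tail is treated as suspicious.
    return (med - ndc) / mad > n_mads

# Example: one clearly low NDC among otherwise similar values
ndc_values = [0.91, 0.93, 0.92, 0.90, 0.72, 0.94]
print(flag_ndc_outliers(ndc_values))
# → [False False False False  True False]
```

A robust detector such as this median/MAD rule is preferable to a fixed cutoff here because typical NDC values vary with acquisition protocol, so "low" is best defined relative to the rest of the study.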

Best regards,
Frank

