Original bit-depth of reconstructions


Gabriel Devenyi

Oct 23, 2025, 6:14:58 PM
to HCP-Users
Hi,

I was wondering what the original bit-depth of the collected data is, as it is not documented in the protocols.

I ask because FSL defaults to upcasting data to 32-bit floats, which massively inflates file sizes with nothing but numerical noise. I'm interested in shrinking the data back down to a sensible bit-depth.
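
For concreteness, here is the kind of downcast I have in mind, as a minimal sketch (assuming Python with numpy and nibabel; the file names are hypothetical):

import numpy as np
import nibabel as nib

# Load a float32 image produced by the pipeline (hypothetical name).
img = nib.load("bold_float32.nii.gz")
data = img.get_fdata(dtype=np.float32)

# Request int16 on disk; on save, nibabel computes scl_slope/scl_inter
# automatically so the float values map onto the integer range and
# round-trip up to quantization error.
out = nib.Nifti1Image(data, img.affine, img.header)
out.header.set_data_dtype(np.int16)
nib.save(out, "bold_int16.nii.gz")

(fslmaths also has an -odt short option for the output datatype, though I haven't checked how it handles rescaling.)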

Thanks.

Glasser, Matthew

Oct 23, 2025, 6:26:11 PM
to hcp-...@humanconnectome.org

32-bit float is a fact of life. We have rarely found it helpful to try to control this (only with very large matrices have we messed with data precision). I think it might be a lot of work (and there are certainly algorithms that are not written for other datatypes).


Matt.


 



Tim Coalson

Oct 23, 2025, 6:46:00 PM
to hcp-...@humanconnectome.org
I think the original ADC bit depth is either 12 or 16 (it's more than 8), so at least while in memory, you can't do better than a 50% reduction compared to float32 (without constantly bit-packing and unpacking, which sacrifices performance). Using float32 makes it trivial to compute and store things like "T1w divided by T2w" without catastrophic rounding problems (or constant fiddling with nifti scale/slope), and when computing in float anyway, writing as float means the next command gets exactly the values that were computed, with nothing hidden. I also don't know how easily gzip can compress the repeating leading zero bits that padded integers would have, so I wouldn't expect continuous-valued float32 files to be much more than double the compressed size of their int16 equivalents, either.
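
To make both points concrete, here's a quick illustration (a sketch assuming Python with numpy; the data are synthetic, not HCP values) of what a single global scale/slope does to a ratio image quantized to int16, plus a rough check on the compressed-size question:

import gzip
import numpy as np

rng = np.random.default_rng(0)
t1w = rng.uniform(100.0, 2000.0, 100_000).astype(np.float32)
t2w = rng.uniform(100.0, 2000.0, 100_000).astype(np.float32)
ratio = t1w / t2w  # continuous-valued float32, roughly 0.05 to 20

# Emulate int16 storage with one global scl_slope for the whole image.
slope = ratio.max() / 32767.0
stored = np.round(ratio / slope).astype(np.int16)
restored = stored.astype(np.float32) * slope

# The quantization step is uniform but the values are not, so the
# relative error is worst wherever the ratio is small.
rel_err = np.abs(restored - ratio) / ratio
print(f"max relative error: {rel_err.max():.1e}")

# Compressed-size comparison: continuous-valued float32 vs int16.
f32_gz = gzip.compress(ratio.tobytes())
i16_gz = gzip.compress(stored.tobytes())
print(f"float32 / int16 compressed size: {len(f32_gz) / len(i16_gz):.2f}")

Since the mantissa bits of continuous-valued data are essentially random, neither buffer compresses well, and the float32 one should come out at roughly twice the int16 size.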

We also tend to avoid lossy compression (like JPEG), so that our statistics are validly traceable to the raw data rather than to compression artifacts. Storage gets cheaper over the long term, possibly at a faster rate than MRI resolution increases.

Tim

