Thanks for the details. I didn't particularly mean to criticize
half-floats; I realized they were probably chosen with the GPU backend
in mind. It's just that '16-bit per pixel color' immediately makes me
think of something else, as in TIFF, etc.
Anyway, for what I'm doing I don't think the difference between 8 and
10 bits is really worth it. I would gently push back on your 11-bit
calculation: not because the values don't exist, but because the ULP
spacing isn't uniform. You only get 10 bits' worth (1024 values) in
the exponent = 2^-1 binade (0.5 to 1); you pick up another 1024 each
time the numbers halve, because the exponent shifts, and then a bunch
more once you reach the denormals, but those are tiny values that are
useless for pixels. I don't see the exponential spacing of floating
point as a particularly good fit for linear (or even gamma-corrected)
RGB; you more or less always want equal spacing. Also, I'm not sure
about GPUs, but denormal floats are usually a massive performance hit
on x86 hardware.
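To make the binade counting concrete, here's a quick sketch (names and
the brute-force approach are mine, not from the original discussion)
that enumerates all 2^16 half-float bit patterns via Python's struct
'e' format and counts how many land in each range:

```python
import math
import struct

def f16(bits):
    # Reinterpret a 16-bit pattern as an IEEE 754 binary16 value
    return struct.unpack('<e', struct.pack('<H', bits))[0]

def count(lo, hi):
    # Count distinct half-float values v with lo <= v < hi
    n = 0
    for b in range(1 << 16):
        v = f16(b)
        if not math.isnan(v) and lo <= v < hi:
            n += 1
    return n

print(count(0.5, 1.0))    # 1024: one binade, i.e. the 10 mantissa bits
print(count(0.25, 0.5))   # 1024 again, but at half the ULP spacing
```

Each halving of the range contributes another 1024 values, which is
exactly the non-uniform spacing I was describing: the extra precision
all piles up near zero, where pixel values don't need it.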
Thanks again,
Dean