World space specified in floating point results in coordinates with
nonuniform granularity, whereas integer coordinates have uniform
granularity.
It is difficult to visualize the locus of points that are representable
in floating point. First imagine a one-dimensional space, for example a
number line (see below). An integer number line is easy to visualize. A
floating point number line would look roughly logarithmic: the spacing
between adjacent values doubles every time you cross a power of two. Now
use that floating point number line in each of the X, Y and Z directions
and you can see this odd subdivision of world space when using floating
point.
So, back to the original question of this post. Why use floating point
in typical 3D programming? Why subdivide space in this logarithmic
manner? Floating point also has fewer significant digits than an integer
of the same width. And by using float, a GPU gets fewer operations per
second than if it used the same amount of silicon for integer math.
Float is needed when very large dynamic ranges are needed. Is that
really necessary in typical OpenGL use?
So, why is floating point used?
Thanks
* example of a float number line:
- imagine an unsigned 4-bit float, 2 bits for the exponent, 2 bits for
the mantissa
- the 16 possible numbers in decimal, one row per exponent value, are:
0,1,2,3
0,2,4,6
0,4,8,12
0,8,16,24
on a number line the distinct values are:
0,1,2,3,4,6,8,12,16,24
And that is the nonuniform granularity subdivision of space that I am
asking about.
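The toy format above is small enough to enumerate directly. A short sketch (the format itself is the post's hypothetical unsigned 4-bit float, not a real IEEE type), treating each value as mantissa times two to the exponent:

```python
# Enumerate every value of the hypothetical unsigned 4-bit float:
# 2 exponent bits (e in 0..3), 2 mantissa bits (m in 0..3),
# value = m * 2**e.  Duplicates (like 0 and 2) collapse in the set.
values = sorted({m * 2 ** e for e in range(4) for m in range(4)})
print(values)  # gaps widen as magnitude grows
```

Printing `values` gives the ten distinct points of the number line, with gap sizes 1, 1, 1, 1, 2, 2, 4, 4, 8 between neighbours.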
> Float is needed when very large dynamic ranges are
> needed. Is that really necessary in typical opengl use ?
For object coordinates, OpenGL doesn't know in advance the range of
values. E.g. in practice, it's not unreasonable to use coordinates
in metres for modelling anything from a solar system to a molecule.
For intermediate results, perspective division will eliminate any
linearity in the coordinate system.
For data formats, it's not that uncommon to use integers if you know the
range, e.g. modelling an object using 16-bit (or even 8-bit) integers for
the vertex coordinates then specifying the bounding box (or
alternatively a transformation) in floating point. But this is all going
to get converted to floating-point internally anyhow.
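The quantized-vertex scheme described above can be sketched as follows (the function name and layout are illustrative, not any specific OpenGL API): integer coordinates index into a floating-point bounding box.

```python
def decode_vertex(q, bbox_min, bbox_max):
    """Map 16-bit quantized coords q (each in 0..65535) into the
    floating-point bounding box given per axis."""
    return tuple(
        lo + (qi / 65535.0) * (hi - lo)
        for qi, lo, hi in zip(q, bbox_min, bbox_max)
    )

# A vertex stored as three 16-bit integers, decoded into a [-1, 1]^3 box:
v = decode_vertex((0, 32768, 65535), (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
```

The integers give uniform granularity within the box, and the box itself carries the range information in floating point.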
a) You'd need an awful lot of bits in an integer
to represent coordinates from (eg.) 1/1000 of a
millimeter up to the size of the solar system.
b) Floats aren't linearly distributed across the
range in (a), true, but percentage error is more
important than distribution. When you're viewing
an object you want the visual error to be "a
fraction of a pixel".
A real world error like "one millimeter" would only
be meaningful if you're viewing something which is
(eg.) a couple of meters across. It would be completely
useless for viewing the parts inside a watch and it's
overkill for viewing a planet.
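The "percentage error" point is easy to check numerically; a small sketch using Python's `math.ulp` (doubles rather than the watch/planet example, as an illustration) shows the absolute gap between adjacent floats growing with magnitude while the relative gap stays nearly constant:

```python
import math

# The gap to the next representable double grows with magnitude,
# but gap/value stays within a factor of two across the whole range.
for x in (0.001, 1.0, 1000.0, 1.0e9):
    rel = math.ulp(x) / x
    print(f"{x:>12}: ulp={math.ulp(x):.3e}  relative={rel:.3e}")
```

Every relative value lands between 2**-53 and 2**-52, i.e. roughly 16 decimal digits of precision whether you are measuring a watch part or a planet.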
c) 3D graphics is more than just object coordinates.
At some point in the graphics pipeline you need
floats, eg. rotation matrices have values derived
using sin() and cos() functions. You'd need an
awful lot of bits in your integer to be able to
represent both useful object measurements *and*
accurate rotation matrices as well.
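A quick sketch of that point (a 2D rotation chosen just for illustration): the matrix entries come straight from sin() and cos() and are generally irrational, so they have no exact integer representation at any sensible scale.

```python
import math

# Entries of a rotation matrix are values of sin/cos -- irrational in
# general, so floating point (or heavily scaled fixed point) is required.
theta = math.radians(30.0)
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# Rotating the unit x-axis vector by 30 degrees:
x = R[0][0] * 1.0 + R[0][1] * 0.0
y = R[1][0] * 1.0 + R[1][1] * 0.0
```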
In the end, floats solve far more problems than
they create. Sure, you can't model an apple orbiting
a planet using the same coordinate system but you
shouldn't ever need to. Model the apple in apple
coordinates and the planet in planet coordinates
and use the world matrix to transform each thing
into the correct position when rendering.
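That apple-and-planet idea can be sketched in a few lines (the scale-plus-offset "world transform" here is a deliberate simplification of a full 4x4 matrix, and the sizes are made-up illustrative numbers):

```python
# Each object is modelled in its own convenient local units; a per-object
# world transform (scale + translate, as a stand-in for a world matrix)
# places it for rendering.
def to_world(local_pt, scale, offset):
    return tuple(scale * c + o for c, o in zip(local_pt, offset))

# Apple modelled in ~unit coords, scaled to 5 cm, placed 7000 km out:
apple_world = to_world((1.0, 0.0, 0.0), 0.05, (7.0e6, 0.0, 0.0))
# Planet modelled in unit coords, scaled to a ~6400 km radius at origin:
planet_world = to_world((1.0, 0.0, 0.0), 6.4e6, (0.0, 0.0, 0.0))
```

Neither object's local coordinates ever need to span the full dynamic range; only the transforms do.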
> World space specified in floating point results in a distribution of
> coordinates with nonuniform granularity vs using integer which results
> in uniform granularity.
How do you figure that?
What's the smallest number you can add to
1.0 which will change it?
What's the smallest number you can add to
1000.0 which will change it?
(assume single precision floats...)
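Those two questions can be answered by stepping to the adjacent single-precision value; a stdlib-only sketch (the bit-twiddling helper is mine, not from the thread) using `struct` to reinterpret float32 bits:

```python
import struct

def next_float32(x: float) -> float:
    """Return the next representable single-precision value above x
    (assumes x >= 0 and finite)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits + 1))[0]

ulp_at_1 = next_float32(1.0) - 1.0            # 2**-23, about 1.19e-7
ulp_at_1000 = next_float32(1000.0) - 1000.0   # 2**-14, about 6.1e-5
```

So the smallest effective increment at 1000.0 is 512 times larger than at 1.0, which is exactly the nonuniform granularity the original post describes.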
Read this. http://en.wikipedia.org/wiki/Floating_point
The first paragraph explains it all. Floating point is used to represent
numbers that are too big or too small to be represented by integers.
Most commonly used today is the IEEE 754 format.
http://en.wikipedia.org/wiki/IEEE_754
I really don't want to explain what is already explained in those links,
but just let it be said that floating point does not give a totally even
distribution. There's a point where you add 1 to a floating point number
and the result is not the floating point number + 1 but some other
number, because of how floating point values are stored.
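That tipping point is easy to demonstrate; a small sketch (emulating single precision with Python's `struct`, an illustration rather than anything from the links above):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python double to the nearest single-precision value."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# 2**24 uses the full 24-bit float32 significand; one step above it,
# adding 1 no longer changes the stored value.
big = to_f32(2.0 ** 24)           # 16777216.0
same = to_f32(big + 1.0)          # the +1 is rounded away entirely
```

Here `same == big`: past 2**24, consecutive integers are no longer representable in single precision.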