> Please, explain differences between float and double data types.
Please, read K&R2, page 36.
--
Richard Heathfield <http://www.cpax.org.uk>
Email: -http://www. +rjh@
Google users: <http://www.cpax.org.uk/prg/writings/googly.php>
"Usenet is a strange place" - dmr 29 July 1999
Thanks a lot.
What is the reason for this? I have seen similar suggestion elsewhere,
but I also saw an opposing suggestion a few days ago (that double
computations are expensive, so if float gives enough accuracy for your
needs, you should use float). The second argument seems convincing to
me, so could you please clarify your view? Thank you.
Since in the vast majority of cases float is promoted to double anyway, all
you're doing by using float is taking up clocks on the conversion. You
might just as well start with double and save yourself some processing
power. In any case, modern desktop machines (which is what anyone asking
this question is using!) tend to have these little widgets in them called
mathematics co-processors, which make double precision arithmetic
blitzingly fast.
Note that this was advice to beginners (though I think it is good advice
in general). Choosing to use float rather than double is the kind of
micro-optimisation that should be reserved till such time as its value
becomes significant.
The choice to use either long double or float rather than double is one
that should be based on a degree of understanding of the consequences
that I would not expect for a novice or incidental programmer.