I've searched for a definitive solution but couldn't find one. So I
came here...
Can I use a floating point number (either single or double precision)
as the key for a map (and it work correctly)?
I'm aware of the issues with floating point equality but as the map is
doing equivalence (strict weak ordering) wondered whether it would
just work or will I have to write my own compare functor.
Thanks
AJ
--
[ See http://www.gotw.ca/resources/clcm.htm for info about ]
[ comp.lang.c++.moderated. First time posters: Do this! ]
I do not know all the tricks of floating point numbers, but I have learned
from other people on this forum that comparing two close values may give
different results at different times for fairly "random" reasons (like
whether the numbers happen to be stored in registers or in RAM). So I am
pretty sure that a map of floats with the default comparison function is a
bad idea. A custom functor may be a solution, but it surely cannot be a
function like:

bool less( double x, double y ) {
    return x + EPSILON < y;
}
This is discussed in http://cpp-next.com/archive/2010/02/order-i-say/
Look for Sean Parent's comments.
Regards,
&rzej
Despite the drivel that's written in newsgroups, equality of
floating-point values is well defined and useful. Don't be afraid of it.
--
Pete
Roundhouse Consulting, Ltd. (www.versatilecoding.com) Author of
"The Standard C++ Library Extensions: a Tutorial and Reference"
(www.petebecker.com/tr1book)
It will work, in the sense that the compiler will be completely happy
with the code and you are not invoking the demons of UB when you use
float or double as your key.
Just be aware that finding a particular key might not work as you
expect, due to differences in precision/round-off error between the
stored key and the value you are trying to match it with.
Bart v Ingen Schenau
Define "correctly". You will have numbers that are almost indistinguishable
from each other but compare unequal.
> I'm aware of the issues with floating point equality but as the map is
> doing equivalence (strict weak ordering) wondered whether it would
> just work or will I have to write my own compare functor.
The only values that would not work correctly would be NaNs. IIRC, the
following holds:
(NaN < x) == false
(x < NaN) == false
(x == NaN) == false
Uli
--
Sator Laser GmbH
Geschäftsführer: Thorsten Föcking, Amtsgericht Hamburg HR B62 932
As usual with floating-point comparison, it will work, but might not
give you what you need :-).
It's easy to see why: you obtain one key through a calculation. You
obtain another and it's close enough to the first that it should be
considered same as the first. But since it's not __exactly__ the same,
you'll e.g. end up with two key-value pairs in the map. The question
is simply: at what point do two close values count as one?
Goran.
So long as the comparison defines a strict weak ordering then it'll
work correctly, no matter the type.
So, does floating point < define a strict weak ordering?
If you avoid NANs (and as -0.0 and +0.0 behave as equals when compared)
then yes.
Louis.
In other words, finding a particular key might not work if you instead
look for a different key.
--
Pete
Or, to put it the other way around, you will have numbers that are not
equal and compare unequal.
As always, the issue is not whether equality means equality, but whether
the programmer knows enough about the source of the numbers to
understand whether they should be equal.
It's really not that much different from this:
int a = 1/3;
int b = a * 3;
assert(b == 1);
Gasp! The assert triggers! == isn't doing what it's supposed to do.
The difference is that most programmers have learned the rules for
integer arithmetic. Far fewer have learned the rules for floating-point
arithmetic, and instead resort to incantations.
--
Pete
Good point.
--
Pete
> If you avoid NANs (and as -0.0 and +0.0 behave as equals when compared)
> then yes.
This was kinda the answer I was hoping for but the responses are far
from conclusive.
Is this in the standard anywhere?
I agree with the general opinion that without knowledge of what the
floating point number represents it's pretty difficult to determine how
they should logically compare (what constitutes representation error
vs. actual difference).
I might err on the side of caution and decide on an epsilon tailored
to my specific problem.
Thanks for all the responses
AJ
--
But what about this?
double a = 1; double b = 3; assert(a/b == 1.0/3.0);
This isn't just people not understanding floating point,
it's the language definition deliberately conspiring
against making it be understandable.
(For the uninitiated, the problem is 5/10, the paragraph of the standard
that allows floating operands and results to be represented in greater
precision than the type requires.)
--
I don't get what you mean here. To me, a float represents a float.
I think you should decide what type of number you're trying to model
and then ask if floats can model (or be made to model) that type, not
the other way round.
> I might err on the side of caution and decide on an epilson tailored
> to my specific problem.
Using epsilon will almost guarantee you don't have a strict weak order.
That is if floats differing by no more than epsilon[1] compare equal
then you no longer have transitivity of equivalence.
Louis.
[1] Unless epsilon is 0, of course.
Sort of. IEEE-754 prescribes this behavior, and numeric_limits<T> has a
flag (is_iec559) that tells you whether a particular type supports
IEEE-754. C++0x goes quite a bit further in this regard, but mandating
IEEE-754 just wouldn't fly because it would be prohibitively expensive on
systems that don't support it in hardware.
--
Pete
What I mean is, say I have a sensor producing a continuous value to
within some error/tolerance; then I can set epsilon to a value that
appropriately distinguishes between what is error due to floating
point representation and what represents an actual change in input.
> That is if floats differing by no more than epsilon[1] compare equal
> then you no longer have transitivity of equivalence.
I wasn't really referring to numeric_limits::epsilon, more to a custom
value (much larger than epsilon<float>) tailored to my specific
problem.
AJ
{ edits: quoted banner removed. please keep readers in mind. -mod }
This is fine if you know that the values will be well separated but may
fail if distinct actual values approach too closely.
> > Can I use a floating point number (either single or double precision)
> > as the key for a map (and it work correctly)?
>
> So long as the comparison defines a strict weak ordering then it'll
> work correctly, no matter the type.
>
> So, does floating point < define a strict weak ordering?
>
> If you avoid NANs (and as -0.0 and +0.0 behave as equals when compared)
> then yes.
Assuming well-behaved arithmetic.
Non-IEEE-754 arithmetic tends to have badly-behaved areas.
Compilers -- or code -- that mix precisions can cause
problems here.
BTW, I know that IEEE specifies that -0.0 == +0.0,
but does it specify that -0.0 is not less than +0.0?
In the old days, before IEEE, hardware that
had +0.0 and -0.0 often had -0.0 < +0.0.
Neither was I.
Louis.
Interesting point. But this, in itself, doesn't fix the problem that a
fuzzy comparison function doesn't induce a strict weak ordering. As
Francis said, if the values are far enough apart, then you do get a
strict weak ordering. (On the other hand, if they're that far apart,
then you don't need the fuzzy comparison.) Another possibility would be
to divide the possible input range into a set of subranges, and replace
each value with the central value in its subrange. Voila, no fuzzy
comparisons. One way to do this would be to zero out the low two or
three bits in the value's mantissa.
--
Pete
You cannot use the standard library methods that
require strict-weak comparisons with a comparator
that says |α - β| ≤ ε → α = β, because equality
must be transitive, while the formula above will
give α - ε = α and α + ε = α but α - ε ≠ α + ε.
This is true no matter what positive value ε is.
If your sensor is producing values accurate to a
certain tolerance, round the values to that tolerance
before comparing. Or just divide the result by the
tolerance and use integers. Or switch to Ada, which
has fixed-point reals with specifiable tolerance.
I think you should use float with standard comparison for the map (to
have a valid order), but should use lower_bound or upper_bound and not
find for querying the map. This will give an iterator, which will allow
for checking close by values.
For instance:
bool has_approximate_value(set<float> const &s, float x, float eps)
{
    set<float>::const_iterator it = s.lower_bound(x - eps);
    if (it == s.end() || *it > x + eps)
        return false;
    return true;
}
Geert-Jan Giezeman