# precise numbers


### cej38

Oct 12, 2010, 12:17:36 PM
to Clojure
I keep running into this type of problem:

user=> (- 12.305 12.3049)
9.999999999976694E-5

The computer (probably the JVM) has just lied to me. Any fourth-grade
student knows that this does not equal 0.0001. This would be less
of a problem if the JVM were consistent; if it were consistent then the
following equality would be true:

user=> (= 0.0001 (- 12.305 12.3049))
false

Now it has lied to me again!

I need a method to reliably compare two numbers. My first choice
would be to have the JVM compute (- 12.305 12.3049) correctly.
Barring that, I need a way to, without fail, compute the truthfulness
of the following: =, <, >, <=, >=.

### Felix H. Dahlke

Oct 12, 2010, 12:24:39 PM
You could use BigInteger, which was created to work around double's
rounding issues - among other things.

(- 12.305M 12.3049M)
0.0001M


### David Nolen

Oct 12, 2010, 12:31:45 PM
On Tue, Oct 12, 2010 at 12:17 PM, cej38 wrote:

> I keep running into this type of problem:
>
> user=> (- 12.305 12.3049)
> 9.999999999976694E-5

### Alan

Oct 12, 2010, 12:33:25 PM
to Clojure
The JVM has no choice: it must faithfully implement the IEEE floating-
point spec (http://en.wikipedia.org/wiki/IEEE_754-2008), which
specifies limited precision. By asking it to use floats, you are
demanding that it accept rounding errors. If you want precision, there
are lots of ways to get it; the most common is to cast everything up
to integers and perform the computation there, then convert back down
later:

user=> (def numbers [12.305 12.3049])
#'user/numbers
user=> (/ (apply - (map #(int (* 10000 %)) numbers)) 10000)
1/10000

Or you can tell Java to use BigDecimals, which have arbitrary
precision:
user=> (- 12.305M 12.3049M)
0.0001M

You could use Clojure's excellent ratio type instead:
user=> (def nums (map #(/ % 10000) [123050 123049]))
#'user/nums
user=> nums
(2461/200 123049/10000)
user=> (apply - nums)
1/10000

Computers don't lie; they just do what you tell them to.

### David Sletten

Oct 12, 2010, 12:50:06 PM
This discussion may help:
http://www.gettingclojure.com/cookbook:numbers#comparing-floats

Have all good days,
David Sletten


### Alan

Oct 12, 2010, 1:10:49 PM
to Clojure
Also see (rationalize) to simplify my example of using ratios. I
couldn't remember the name of the function off the top of my head, so
I used a hacked-up-by-me version.
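A quick sketch of what that looks like: `rationalize` converts each double to an exact ratio (via its printed decimal form), so the subtraction itself is done with exact arithmetic.

```clojure
;; rationalize turns a double into an exact ratio before we subtract,
;; so no floating-point rounding is involved in the arithmetic.
(rationalize 12.305)                            ; => 2461/200
(- (rationalize 12.305) (rationalize 12.3049))  ; => 1/10000
```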

### Felix H. Dahlke

Oct 12, 2010, 1:25:14 PM
Um, I meant BigDecimal, not BigInteger.

### ataggart

Oct 12, 2010, 1:49:15 PM
to Clojure
Welcome to floating point math.

As an alternative, try using arbitrary-precision numerics:
user=> (- 12.305M 12.3049M)
0.0001M
user=> (type *1)
java.math.BigDecimal


### Nicolas Oury

Oct 12, 2010, 1:39:47 PM
If you want to be really precise: most real numbers have an infinite
number of decimals.
If you encode them as a lazy seq of decimals, +, -, and other ops are doable.

Comparison is only semi-decidable: it terminates only in certain cases
(a finite number of decimals)
or when the numbers are different.

BigDecimal or ratios are a good approximation, though.

### cej38

Oct 12, 2010, 2:53:16 PM
to Clojure
On Oct 12, 12:50 pm, David Sletten <da...@bosatsu.net> wrote:
> This discussion may help: http://www.gettingclojure.com/cookbook:numbers#comparing-floats

I originally tried something like the float= described in the link; I give
the definition here:

(defn float=
  ([x y] (float= x y 0.00001))
  ([x y epsilon]
   (let [scale (if (or (zero? x) (zero? y)) 1 (Math/abs x))]
     (<= (Math/abs (- x y)) (* scale epsilon)))))

And the truth table that was given with the function definition:

(float= 0.01 0.0) => false
(float= 0.001 0.0) => false
(float= 0.0001 0.0) => false
(float= 0.00001 0.0) => true

And this works for the problem that I discussed before
user=> (float= 0.0001 (- 12.305 12.3049))
true

But I can come up with a use case where it fails:

user=> (float= 12.3049 12.305)
true

The problem is that the IEEE specification does a great job of
comparing floats, but does a crap job of doing simple math on them.

### Luka Stojanovic

Oct 12, 2010, 3:14:02 PM

and again it does exactly what it promises to do:

epsilon is maybe misnamed here, because it is not an absolute value but
a number of significant figures: it is multiplied by x to get the precision. So

|12.3049 - 12.305| <= 12.3049 * 0.00001, which holds true.
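That inequality is easy to confirm at the REPL; a minimal sketch of the same check float= performs:

```clojure
;; relative-tolerance comparison: the allowed difference scales with x
(let [x 12.3049
      y 12.305
      epsilon 0.00001
      scale (Math/abs x)]
  (<= (Math/abs (- x y)) (* scale epsilon)))  ; => true
```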

### Angel Java Lopez

Oct 12, 2010, 3:34:42 PM
Short comment:

I remember Common Lisp has rational numbers (I'm not sure). Any rational number library for Clojure?

Angel "Java" Lopez
http://www.ajlopez.com

### cej38

Oct 12, 2010, 3:35:24 PM
to Clojure
The more that I think about it, the more I would rather have a set of
equalities that always work. float= was a good try.

### David Sletten

Oct 12, 2010, 3:44:36 PM
I suggest you read the article a bit more closely. Here are the details of your specific case.

The number that you are typing as 12.305 is actually being stored in the computer as this:
1100.0100111000010100011110101110000101000111101011100

This number is actually this fraction:
1731774794212311/140737488355328

So here is the true value in decimal:
12.304999999999999715782905695959

It is not possible to represent 12.305 any closer than that on conventional hardware.

Likewise, the number you type as 12.3049 is actually stored as this:
1100.0100111000001101111011010010100010001100111001110

Which is this fraction:
3463521440926951/281474976710656

Or in decimal:
12.304899999999999948840923025272

This is the best approximation for 12.3049.

When you scale epsilon (0.00001) by about 12, you can see that these two numbers are "equal" as expected in the sense defined by float=.
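One way to inspect those exact stored values at the REPL (a sketch of mine: the BigDecimal double constructor preserves the double's binary value exactly, while BigDecimal/valueOf rounds through Double/toString):

```clojure
;; The BigDecimal(double) constructor converts the binary value exactly.
(BigDecimal. 12.305)   ; => 12.304999999999999715782905695959...M
(BigDecimal. 12.3049)  ; => 12.304899999999999948840923025272...M

;; By contrast, valueOf goes through the shortest decimal representation.
(BigDecimal/valueOf 12.305)  ; => 12.305M
```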

Have all good days,
David Sletten

### Mike Meyer

Oct 12, 2010, 4:28:08 PM

Then you can't use floats.

As others have explained, floats are imprecise by nature, being
limited to finite binary fractions. To make matters worse, you don't
input the numbers in binary, but in decimal, which means most of the
fractions you can input can't be represented as a float - so you get
an approximation. Given that the numbers are "fuzzy", the concept of
equality also becomes "fuzzy" - whether two numbers are equal will
depend on the context. So you have to choose the equality that's
appropriate for the context.

If you want precise numbers, Clojure has three options:

1) If you can represent everything as integers, then BigInteger is
probably the easiest to use, with the obvious drawback that it can't
handle fractional values, and very large values are bounded only by
available memory. Letting units represent 1/10000ths:

user> (- 123050 123049)
1

2) If you're going to stay in the world of decimal fractions, use
BigDecimal. This has some of the problems of floats, but between
allowing arbitrary precision and the input representation matching the
internal representation, they're not nearly as obnoxious.

user> (- 12.305M 12.3049M)
0.0001M

3) Clojure's rationals let you represent all rational values, until
you run out of memory. If you want a decimal approximation after the
calculation is done, that's easy to get as well:

user> (- 12305/1000 123049/10000)
1/10000
user> (float (- 12305/1000 123049/10000))
1.0E-4

<mike
--
Mike Meyer <m...@mired.org> http://www.mired.org/consulting.html

O< ascii ribbon campaign - stop html mail - www.asciiribbon.org

### Brian Hurt

Oct 12, 2010, 5:44:20 PM
On Tue, Oct 12, 2010 at 3:35 PM, cej38 wrote:

> The more that I think about it, the more I would rather have a set of
> equalities that always work. float= was a good try.

<RANT>

Every fucking language I've ever worked on has had this problem- "floats are broken!"  And every single one, people keep coming up with the same wrong answers to this.  C, Java, Ocaml, Haskell, SQL, now Clojure.  Makes me wonder what language you are coming from where floats *aren't* broken.  Some languages patch their print methods to hide the errors in the simple cases, but I've yet to see one where 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 + 0.1 = 1.0.  Try it in the language of your choice.
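That ten-0.1s experiment, as a one-line sketch in Clojure:

```clojure
;; ten copies of 0.1 summed left to right in doubles
(reduce + (repeat 10 0.1))          ; => 0.9999999999999999
(== 1.0 (reduce + (repeat 10 0.1))) ; => false
```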

First of all, the base of the floating point number just changes which fractions "break".  For example, in base 10, 1/3 * 3 = 0.99999...  So no, going to base-10 doesn't save you.  IEEE 754 floats are base-2, so 1/10 is impossible to represent precisely, the same way 1/3 is impossible to represent precisely in base-10.  Oh, and Clojure has, from its Java roots, a BigDecimal class which does do base-10 arithmetic.

And no, going to rationals doesn't save you either.  Take the vector [1, 0, 1], and normalize it so its Euclidean length is 1, without error.  Have a nice day.  *Any* finite precision representation is going to have round-off error.  Oh, and Clojure does have a rational class.

And once you have round-off, you're going to have numeric errors.  At which point, how you deal with those errors is important.  Numeric algorithms fall into two broad categories- stable and unstable.  With stable algorithms, errors tend to cancel each other, so your final answer is going to be pretty close to correct.  With unstable algorithms, errors tend to accumulate (I'm simplifying here for the newbies, for those who know what I'm talking about).

No, throwing more bits at the problem won't save an unstable algorithm, it'll just take longer for the unstable algorithm to finally destroy any and all accuracy.  Double precision numbers give you enough precision to measure the distance from here to the moon- in multiples of the wavelength of red light.  Precisely.  Also note there is a difference between precision and accuracy- if I say pi is equal to 3.179830482027405068272948472, that's very precise- but not very accurate.  Unstable algorithms destroy accuracy, not precision.

And no, ranged floats don't help either- as they grossly overestimate the error of stable algorithms.  A classic example of a super-stable algorithm is Newton's method- generally, the number of bits of precision is doubled (or more) every iteration, as the algorithm converges to the answer. But ranged floats have the error increasing every iteration.
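A sketch of the Newton's-method behavior described above, for square roots (the function name here is invented):

```clojure
;; One Newton step for sqrt(x): average the guess g with x/g.
(defn newton-step [x g]
  (/ (+ g (/ x g)) 2.0))

;; Successive approximations to sqrt(2), starting from 1.0. The error
;; roughly squares at each step, so the number of correct digits about
;; doubles per iteration; a handful of steps reaches full double
;; precision despite the round-off in every intermediate operation.
(take 7 (iterate (partial newton-step 2.0) 1.0))
```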

Oh, and double precision floating point numbers (especially if they're unboxed) are between 10x and 1000x as fast as other representations- thanks to hardware acceleration.  Your CPU can execute one floating point operation a clock cycle, two if it's vectorized code.

Floating point is not broken.

</RANT>

Brian

### Mike Meyer

Oct 12, 2010, 6:23:48 PM
On Tue, 12 Oct 2010 17:44:20 -0400
Brian Hurt <bhu...@gmail.com> wrote:

> On Tue, Oct 12, 2010 at 3:35 PM, cej38 <junke...@gmail.com> wrote:
>
> > The more that I think about it, the more I would rather have a set of
> > equalities that always work. float= was a good try.
> <RANT>

Maybe initially, but not later on...

> Floating point is not broken.
> </RANT>

Well, no more broken than any other representation of numbers on the
computer. All of them have problems of one sort or another. For
almost any problem you're going to solve with a computer, picking the
right representation for your data is a crucial step!

While you may understand the mathematics of numbers quite well, no
computerized representation will give you that behavior (even integers
are usually weird, either having a number which has no additive
inverse, or having two zeros). You can expect every mathematical
entity to have this problem: pretty much every computer representation
will not have some property or properties of the original, so you have
to figure out which properties you really care about, and pick the
representation that has those properties.

That picking the right representation of numbers is a crucial step may
be surprising, but it's still true. Most languages have settled on the
same set of choices (in order of decreasing use): integers, either
with or without size limits; floats; decimals, either with or without
explicit lengths; and rationals. You can expect that most problems can
be dealt with adequately by one of these. If not - well, there are
lots of other choices out there that you can play with as well.

### Nicolas Oury

Oct 13, 2010, 2:37:48 PM
On Tue, Oct 12, 2010 at 8:35 PM, cej38 <junke...@gmail.com> wrote:
> The more that I think about it, the more I would rather have a set of
> equalities that always work.  float= was a good try.

The only way to do so is to have numbers with infinite precision.

For example as lazy-seq of their digits.

But:
- it is slow
- equality is semi-decidable only.

### Felix H. Dahlke

Oct 13, 2010, 4:17:07 PM
Nice rant, I learned something here :) I somehow thought BigDecimal
wouldn't have any precision issues, but I probably never noticed because
I can only think in base-10 arithmetic. Has been good enough for my
humble precision problems so far though, never had a notable performance
issue with it either.


### David Sletten

Oct 13, 2010, 4:28:40 PM

On Oct 12, 2010, at 5:44 PM, Brian Hurt wrote:

> For example, in base 10, 1/3 * 3 = 0.99999...

It may seem counterintuitive, but that statement is perfectly true.
1 = 0.9999...

That's a good test of how well you understand infinity.

Of course, the problem arises when we truncate the string of 9's, which we must invariably do in a computer-based representation.

### Jarl Haggerty

Oct 13, 2010, 5:02:00 PM
to Clojure
A slight tangent: is there any way to represent exact square roots?
I'd like to have an exact closed-form Fibonacci function.

### Alan

Oct 13, 2010, 5:12:26 PM
to Clojure
Um, not if you're going to try and turn them into numbers first. But
if you store them as ['sqrt 5] or some similar operator/operand pair,
then you can manipulate them symbolically and wait for them to cancel
out, instead of evaluating them. This requires implementing some kind
of symbolic-algebra system, but it's certainly possible: when I was in
college my TI-89 could compute the closed form for arbitrarily large N
without losing precision (or accuracy).

### Felix H. Dahlke

Oct 13, 2010, 6:27:39 PM
On 13/10/10 22:28, David Sletten wrote:
>
> On Oct 12, 2010, at 5:44 PM, Brian Hurt wrote:
>
>> For example, in base 10, 1/3 * 3 = 0.99999...
>
> It may seem counterintuitive, but that statement is perfectly true.
> 1 = 0.9999...
>
> That's a good test of how well you understand infinity.

I'm clearly not a mathematician, but doesn't 0.99999... asymptotically
approach 1, i.e. never reach it? How is that the same as 1?


### Matt Fowles

Oct 13, 2010, 6:36:18 PM
Felix~

You are correct that the sequence of numbers

0.9
0.99
0.999
...

asymptotically approaches 1; however, the number 0.9999... (with an infinite number of 9s) is equal to 1.  The formal proof of this is fairly tricky as the definition of the real number is usually done as an equivalence class of Cauchy sequences; a simplified version of the proof can be thought of as follows:

For any two distinct real numbers a and b there exist an infinite number of real numbers c such that a < c < b.  However, there do not exist any numbers between 0.99999... and 1, thus they must be the same number.

As it turns out, it took mathematicians a long time to nail down formally exactly what we naively think of as "numbers".

Matt

### David Sletten

Oct 13, 2010, 6:56:21 PM
Here's a slightly more informal argument. Suppose you challenge me that 1 is not equal to 0.9999... What you are saying is that 1 - 0.9999... is not equal to 0, i.e., the difference is more than 0. But for any positive value arbitrarily close to 0 I can show that 0.999... is closer to 1 than that. If you were to say that the difference is 0.1, I could show that 0.999... > 0.9 so the difference is smaller. For every 0 you added to your challenge: 0.1, 0.01, 0.001 I could provide a counterexample with another 9: 0.9, 0.99, 0.999, ... In other words, there is no positive number that satisfies your claim, so equality must hold.

Have all good days,
David Sletten

### Mike Meyer

Oct 13, 2010, 6:58:12 PM
On Thu, 14 Oct 2010 00:27:39 +0200

"Felix H. Dahlke" <f...@ubercode.de> wrote:

This representation implies the sum of the series (iterate #(/ % 10)
9/10), and that sum behaves as you say. However, since the series is
infinite, you can prove that the number it represents is actually
equal to one, like so:

1) a = 0.999... # define a as 0.999...
2) 10a = 9.999... # multiply both sides by 10
3) 10a - a = 9.999... - 0.999... # subtract equation 1 from equation 2
4) 9a = 9 # simplify
5) a = 1 # divide both sides by 9.

The subtraction step doesn't work unless the sequence is infinite: if
a is any finite sequence of 9s, you'll get a number whose decimal
representation matches the regex 8\.9*1. This relies on the property
that adding 1 to an infinite number of 9s gives you back the same
infinite number, so that:

(= (rest (map #(* % 10) (iterate #(/ % 10) 9/10)))
(iterate #(/ % 10) 9/10))

is true, but I don't recommend typing that into a repl to check it!

Hmm. I wonder if you could represent irrationals as lazy sequences,
and do arithmetic on those?

### Terrance Davis

Oct 13, 2010, 7:04:07 PM
The quick and dirty proof (not formal proof) for 1 = .9999...

1/3 = .333333...
2/3 = .666666...
3/3 = .999999...

Think of 3/3 as 1/3 (that is .33333...) times 3.

-Terrance Davis
www.terrancedavis.com

### Felix H. Dahlke

Oct 14, 2010, 2:43:53 AM
Okay, I think I get it. Thanks :)


### Nicolas Oury

Oct 14, 2010, 5:06:54 AM
Another proof:

Let's study the sequence sn = 0.9999...9, with n 9s.

That is, s0 = 0 and s(n+1) = sn + 9 / 10^(n+1).

lim sn = 0.99999.......
and lim sn = 1.

so ....

If I remember my math correctly,
the number 0.9999...... does not exist:
it is not a legal decimal sequence.
(Any decimal sequence ending in 9999...... is forbidden, to allow a
one-to-one mapping between real numbers and
decimal sequences.)

This kind of infinity is one of the reasons equality is not decidable
on real numbers.

You can manipulate square roots directly, for example by defining
numbers as a map from ratio to ratio:
sqrt(5) is represented by {5 1}
sqrt(5) + 3*sqrt(2) by {5 1, 2 3}
15 + sqrt(3) ----> {1 15, 3 1}

Adding is just a reduce of one map into the other;
negation is a map.

Multiplying is more complex. For each pair of entries
[a b] [c d], you check whether (* a c) can be written as (sqr e) * f
(for example, 2 * 6 can be written as (sqr 2) * 3).
If it is the case, you return [f (* e b d)],
else you return [(* a c) (* b d)].

Dividing is more difficult.

I don't know if it is the most efficient way to do that, but it is the
easiest to code.

Best,

Nicolas.

### Julian

Oct 14, 2010, 7:13:23 AM
to Clojure
For the sake of comparison with other LISPs - some Scheme
implementations have an exact? function for dealing with non-precise
numbers.

This seems not to have come across to Clojure. (Although we do have
ratios and BigInteger).

JG

### cej38

Oct 14, 2010, 12:07:31 PM
to Clojure
I am kinda sorry that I started this whole thing.  I don't need
another lesson in limits.  The simple fact of the matter is that, in
my code, I run into a place where I have a comparison (= some-value
(some-function some-data)); the function, data, and value can change.
In a use case that I am interested in, I run into the problem stated:

user=> (= 0.0001 (- 12.305 12.3049))
false

I am OK with replacing the = function with something like the float=
discussed above, but whatever I change it to needs to work.  If
anyone has found a way that reliably works, please post it here.
Further, <, >, <=, and >= would also be appreciated.  Thank you.

### Brian Hurt

Oct 14, 2010, 12:26:37 PM
On Thu, Oct 14, 2010 at 12:07 PM, cej38 wrote:

> I am kinda sorry that I started this whole thing.  I don't need
> another lesson in limits.  The simple fact of the matter is that, in
> my code, I run into a place where I have a comparison (= some-value
> (some-function some-data)); the function, data, and value can change.
> In a use case that I am interested in, I run into the problem stated:
>
> user=> (= 0.0001 (- 12.305 12.3049))
> false
>
> I am OK with replacing the = function with something like the float=
> discussed above, but whatever I change it to needs to work.  If
> anyone has found a way that reliably works, please post it here.
> Further, <, >, <=, and >= would also be appreciated.  Thank you.

Comparing two floats for equality is almost always wrong.  What you want to do is check whether their difference is less than some small number.  To compare two floating point numbers x and y, you want to do (<= (Math/abs (- x y)) epsilon), where epsilon is some suitably small number (based on the size of the error of the computation).
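As a sketch (the name approx= is invented), with the caveat that a fixed absolute epsilon only suits values of known magnitude:

```clojure
(defn approx= [x y epsilon]
  ;; absolute-tolerance comparison: true when x and y differ
  ;; by no more than epsilon
  (<= (Math/abs (- x y)) epsilon))

(approx= 0.0001 (- 12.305 12.3049) 1e-9)  ; => true
```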

Brian

### David Sletten

Oct 14, 2010, 5:07:43 PM

On Oct 14, 2010, at 12:07 PM, cej38 wrote:

> I am kinda sorry that I started this whole thing. I don't need
> another lesson in limits. The simple fact of the matter is that, in
> my code, I run into a place where I have a comparison (= some-value
> (some-function some-data)), the function, data, and value can change.
> In a use case that I am interested in, I run into the problem stated,
>
> user=> (= 0.0001 (- 12.305 12.3049))
> false
>
> I am OK with replacing the = function with something like float=
> discussed above, but whatever I change it to needs to work. If
> anyone has found a way that reliably works, please post it here.
> Further, <, >, <=, and >= would also be appreciated. Thank you.

If you define 2 of them, then you get the rest for free. We already have float=:

(defn float=
  ([x y] (float= x y 0.00001))
  ([x y epsilon]
   (let [scale (if (or (zero? x) (zero? y)) 1 (Math/abs x))]
     (<= (Math/abs (- x y)) (* scale epsilon)))))

If x < y, then in our case x should be more than epsilon below y:

(defn float<
  ([x y] (float< x y 0.00001))
  ([x y epsilon]
   (let [scale (if (or (zero? x) (zero? y)) 1 (Math/abs x))]
     (< x (- y (* scale epsilon))))))

(defn float<=
  ([x y] (or (float< x y) (float= x y)))
  ([x y epsilon] (or (float< x y epsilon) (float= x y epsilon))))

(defn float>
  ([x y] (not (float<= x y)))
  ([x y epsilon] (not (float<= x y epsilon))))

(defn float>=
  ([x y] (or (float> x y) (float= x y)))
  ([x y epsilon] (or (float> x y epsilon) (float= x y epsilon))))

Then you determine how strict epsilon needs to be:
(float< 12.3049 12.305) => false
(float< 12.3049 12.305 1e-6) => true
(float<= 12.305 12.3049) => true
(float<= 12.305 12.3049 1e-6) => false
(float> 12.305 12.3049 1e-6) => true

### Steven E. Harris

Oct 16, 2010, 7:05:36 PM
cej38 <junke...@gmail.com> writes:

> (defn float=
>   ([x y] (float= x y 0.00001))
>   ([x y epsilon]
>    (let [scale (if (or (zero? x) (zero? y)) 1 (Math/abs x))]
>      (<= (Math/abs (- x y)) (* scale epsilon)))))

You're scaling epsilon incorrectly here. Epsilon defines the smallest
value that yields a value greater than one when added to one. If you're
not using it along with one, you're using it incorrectly.

What you need to do is scale epsilon by having its exponent match the
value with which you want to use it, while maintaining its mantissa. The
C library offers functions ldexp()¹ and frexp()² to split and scale the
exponent of a floating point number, respectively; Common Lisp offers
DECODE-FLOAT³ and SCALE-FLOAT for the same purpose.

I don't know of any standard functions in Java -- or Clojure -- that
allow one to destructure a floating point number like this. It's
possible to use Float#floatToRawIntBits(), an understanding of IEEE 754,
and tweezers to get there.

If you assume you have a function called `scaled-epsilon' that accepts
the exemplar value with which you intend to use epsilon,

(defn scaled-epsilon [n] ...)

you can use the following function `same?' to compare your floating
point values, assuming they're nonnegative:

,----
| (defn- sufficiently-close?
|   [smaller larger]
|   (or (zero? larger)
|       (if (zero? smaller)
|         (< larger (scaled-epsilon 0.0)))
|       (zero? (- 1 (/ smaller larger)))))
|
| (defn same?
|   [m n]
|   (if (< n m)
|     (sufficiently-close? n m)
|     (sufficiently-close? m n)))
`----

Note too that scaling epsilon must take into account whether you intend
to add it or subtract it from some companion number. It's common to use
the word "epsilon" to mean some fudge factor without considering what
the value really means for a floating point number. Indeed, using it
properly in Java is still too difficult.

--
Steven E. Harris

### Matt Fowles

Oct 16, 2010, 7:11:46 PM
All~

A good "close enough" comparison is actually very difficult.  I adapted one based on

which provides a pretty good example and handles all the cases that I could find.

public static boolean compareDoubles(double tolerance, double d1, double d2) {
    // For a discussion and analysis of this technique, check out:

    // handle 0.0 specifically, as it will screw with our division
    if (d1 == 0.0 || d2 == 0.0) { return Math.abs(d1 - d2) < tolerance; }

    // catch overflow cases: we will divide by d1, which could increase the
    // value of d2, but use absolute value to handle all the different
    // permutations of signs
    double d1abs = Math.abs(d1);
    double d2abs = Math.abs(d2);
    if ((d1abs < 1.0) && (d2abs > d1abs * Double.MAX_VALUE)) { return false; }

    // comparing that they have a ratio near 1.0 handles both very large
    // and very small numbers cleanly with a single tolerance
    double ratio = d2 / d1;
    return ((1 - tolerance) < ratio && ratio < (1 + tolerance));
}

I use a default tolerance of 1e-6, which works fairly well.

Enjoy,
Matt

### David Sletten

Oct 16, 2010, 10:51:11 PM
Steven,

Thanks for your comments. You bring up some interesting points, however, you also raise some more questions.

First, you criticize my use of the variable name 'epsilon'. Of course, this usage is entirely consistent with its ubiquitous use in mathematics. I am designating an (arbitrarily) small positive number used as a bound on the difference of x and y. I am well aware of the more restricted use that you are emphasizing:
* (apropos-list 'epsilon)

(DOUBLE-FLOAT-EPSILON DOUBLE-FLOAT-NEGATIVE-EPSILON LONG-FLOAT-EPSILON
LONG-FLOAT-NEGATIVE-EPSILON SHORT-FLOAT-EPSILON SHORT-FLOAT-NEGATIVE-EPSILON
SINGLE-FLOAT-EPSILON SINGLE-FLOAT-NEGATIVE-EPSILON EPSILON)

In fact, two articles referenced in this thread also use 'epsilon' in exactly the way I have:

But if you would prefer to use the word 'tolerance' or some such term and reserve 'epsilon' (more properly 'machine epsilon') for the concept you have in mind, then I'll go along with that (and I won't even complain about you using the word 'mantissa' :-) ).

More significantly, your discussion of scaling epsilon (your meaning, I presume) got me thinking. Is it strange that we are using a decimal fraction for our tolerance? Should we be thinking in terms of a binary fraction, i.e., rather than 1/100000, use 1/131072? In other words, (Math/pow 2 -17) rather than (Math/pow 10 -5):
(Math/pow 10 -5) => 9.999999999999999E-6
(Math/pow 2 -17) => 7.62939453125E-6

Or are you saying that we should specifically be scaling (machine) epsilon rather than some arbitrary tolerance? As you mentioned this is not as easy in Clojure/Java as in Common Lisp.

Could we define 'scaled-epsilon' something like this?
(defn log2 [x]
  (/ (Math/log x) (Math/log 2)))

(def epsilon
  (loop [eps 1.0]
    (if (> (+ eps 1.0) 1.0)
      (recur (* eps 0.5))
      eps)))

(defn scaled-epsilon [x]
  (* epsilon (Math/pow 2 (Math/floor (log2 x)))))

And finally, what is your response to the OP? You define 'same?' but place several caveats on its use. Are you warning that there is no general-purpose solution?

Have all good days,
David Sletten