BigDecimal precision explosion


David Simmons-Duffin

Dec 31, 2012, 10:10:25 AM
to scala...@googlegroups.com
I'm finding a difference in the behavior of BigDecimals between 2.10.0-RC5 and 2.9.2.  Specifically, computations in 2.10.0-RC5 can lead to explosions in precision.  For example, consider the following computation, done with the default MathContext (which has precision 34), in each version:


Welcome to Scala version 2.9.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_07).
Type in expressions to have them evaluated.
Type :help for more information.

scala> import math.BigDecimal
import math.BigDecimal

scala> val z40 = BigDecimal(0).setScale(40)
z40: scala.math.BigDecimal = 0E-40

scala> (1+ z40*z40).precision
res0: Int = 34


vs


Welcome to Scala version 2.10.0-RC5 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_07).
Type in expressions to have them evaluated.
Type :help for more information.

scala> import math.BigDecimal
import math.BigDecimal

scala> val z40 = BigDecimal(0).setScale(40)
z40: scala.math.BigDecimal = 0E-40

scala> (1+z40*z40).precision
res0: Int = 81


(I am not actually using setScale in my code; rather, numbers with large scales get generated during computations, and those scales are amplified through the above mechanism, resulting in ever-growing numbers and an ever-slowing program.)

The old behavior seems correct to me.  (It can be recovered by doing something like `(1+z40*z40)(z40.mc)`, but this is somewhat ugly, and it would be nice to have the correct MathContext applied at each stage in a computation.)  Was this change intentional?  If so, what was the reasoning behind it, and what is the best way to recover the old behavior?
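
For concreteness, here is a minimal sketch of the workaround I mean (`capped` is just an ad-hoc helper I made up for illustration, not anything in the standard library): re-apply a fixed MathContext after each step so the precision cannot keep growing.

import java.math.MathContext

val mc34 = new MathContext(34)                    // same precision as the default context
def capped(x: BigDecimal): BigDecimal = x(mc34)   // BigDecimal's apply(mc) rounds to mc

val z40 = BigDecimal(0).setScale(40)
capped(1 + z40 * z40).precision                   // 34 again, matching the 2.9.2 session above

Of course, having to wrap every expression like this is exactly the ugliness I was complaining about.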

Erik Osheim

Dec 31, 2012, 10:51:53 AM
to David Simmons-Duffin, scala...@googlegroups.com
On Mon, Dec 31, 2012 at 07:10:25AM -0800, David Simmons-Duffin wrote:
> The old behavior seems correct to me. (It can be recovered by doing
> something like `(1+z40*z40)(z40.mc)`, but this is somewhat ugly, and it
> would be nice to have the correct MathContext applied at each stage in a
> computation.) Was this change intentional? If so, what was the reasoning
> behind it, and what is the best way to recover the old behavior?

So, scala.math.BigDecimal is a bit of a fiasco. I agree that in this
case it'd be nice to have the 2.9 behavior, but the 2.9 behavior itself
broke other things (which worked in 2.8). As Paul mentioned on the
ticket, under 2.9 you get behavior like this:

Welcome to Scala version 2.9.2 (Java HotSpot(TM) 64-Bit Server VM, Java 1.6.0_37).
Type in expressions to have them evaluated.
Type :help for more information.

scala> BigDecimal("123895823953295832958329582392395923573925") % BigDecimal("3")
java.lang.ArithmeticException: Division impossible

So, the behavior was reverted to the 2.8 case to avoid these kinds of
crashes. I agree that the explosion of precision is bad, but I don't
think it's worth trading for these kinds of runtime errors.
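
For the curious, here is roughly what I believe is happening at the java.math level (this assumes the 2.9 wrapper forwarded its 34-digit context down to remainder, which is my reading rather than anything authoritative):

import java.math.{BigDecimal => JBigDecimal, MathContext}

val a = new JBigDecimal("123895823953295832958329582392395923573925")  // 42 digits
val b = new JBigDecimal("3")

a.remainder(b)                          // exact remainder, no context involved, works fine
a.remainder(b, MathContext.DECIMAL128)  // ArithmeticException: Division impossible, because
                                        // the integral quotient needs ~41 digits, more than
                                        // the context's 34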

I don't think many people would disagree that BigDecimal (and possibly
BigInt) need a major overhaul to be useful, non-buggy, first-class
members of the Scala ecosystem. I haven't had the time to try to sit
down and come up with a good overall design for it (yet) but would love
to collaborate on this if it's something you're interested in.

-- Erik

Rex Kerr

Dec 31, 2012, 3:47:22 PM
to Erik Osheim, David Simmons-Duffin, scala-user
The right way to do this is to jettison the Java classes and to implement from scratch an arbitrary-precision library of high quality (or wrap something else, if a good alternative can be found; I always end up using GMP, not a JVM language, when I need arbitrary-precision math).

This is a nontrivial amount of work; probably two or three months solid for someone talented who has the appropriate background.

  --Rex

Erik Osheim

Dec 31, 2012, 3:52:30 PM
to Rex Kerr, David Simmons-Duffin, scala-user
On Mon, Dec 31, 2012 at 03:47:22PM -0500, Rex Kerr wrote:
> The right way to do this is to jettison the Java classes and to implement
> from scratch an arbitrary-precision library of high quality (or wrap
> something else, if a good alternative can be found; I always end up using
> GMP, not a JVM language, when I need arbitrary-precision math).
>
> This is a nontrivial amount of work; probably two or three months solid for
> someone talented who has the appropriate background.

Agreed.

Tom Switzer and I are always talking about working on something like
this for Spire. Scala's current design seems to be pretty seriously
broken: it inherits a lot of problems from Java, and Scala's embedding
of a MathContext in each instance certainly doesn't help things.
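
To spell out that last point with a small made-up example: every instance carries its own context, so two values in the same expression can disagree about rounding, and nothing at the call site tells you which context will govern the result.

import java.math.MathContext

val a = BigDecimal("1.5")                      // carries the default 34-digit (DECIMAL128) context
val b = BigDecimal("1.5")(new MathContext(5))  // rounded to, and carrying, a 5-digit context
println((a.mc, b.mc))                          // two different MathContexts in one expression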

But you're right that it's a ton of work. In the meantime I suppose we
will continue to try to do the "least-worst thing possible" with
scala.math.BigDecimal. ;)

-- Erik

David Simmons-Duffin

Dec 31, 2012, 4:04:06 PM
to scala...@googlegroups.com, Rex Kerr, David Simmons-Duffin, er...@plastic-idolatry.com
I see what you mean about problems from Java -- after switching back to the 2.9.2 behavior for BigDecimals, I started to get arithmetic exceptions.  It turns out the issue is that, for example:

-0.7567436875190268332352103372698674 * 0E-2147483647 // Underflow exception
0E-2147483647 * -0.7567436875190268332352103372698674 // Behaves correctly
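
If it helps, the asymmetry seems to reproduce against java.math.BigDecimal directly, with no Scala wrapper involved. My guess at the cause (and it is only a guess) is that the product's scale is the sum of the operands' scales, and the JDK's overflow check on that sum appears to be skipped when the left-hand operand is zero:

import java.math.{BigDecimal => JBigDecimal, BigInteger}

val x    = new JBigDecimal("-0.7567436875190268332352103372698674")  // scale 34
val zero = new JBigDecimal(BigInteger.ZERO, Int.MaxValue)            // 0E-2147483647

x.multiply(zero)   // throws java.lang.ArithmeticException: Underflow
zero.multiply(x)   // returns 0E-2147483647; the overflowing scale is quietly clamped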

--David