This is the implementation of quot and rem for doubles:
    static public double quotient(double n, double d) {
        if (d == 0)
            throw new ArithmeticException("Divide by zero");
        double q = n / d;
        if (q <= Long.MAX_VALUE && q >= Long.MIN_VALUE) {
            // quotient fits in a long: truncate toward zero via the cast
            return (double) (long) q;
        } else {
            // bigint quotient: truncate via BigDecimal -> BigInteger -> double
            return new BigDecimal(q).toBigInteger().doubleValue();
        }
    }

    static public double remainder(double n, double d) {
        if (d == 0)
            throw new ArithmeticException("Divide by zero");
        double q = n / d;
        if (q <= Long.MAX_VALUE && q >= Long.MIN_VALUE) {
            // subtract the truncated quotient times the divisor
            return (n - ((long) q) * d);
        } else {
            // bigint quotient
            Number bq = new BigDecimal(q).toBigInteger();
            return (n - bq.doubleValue() * d);
        }
    }
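For reference, here is how the small-magnitude branch behaves on a few inputs (my own quick check, not part of the source). The (long) cast truncates toward zero, which a plain Math/floor would get wrong for negative quotients:

    public static void main(String[] args) {
        System.out.println(quotient(10.0, 3.0));   //  3.0
        System.out.println(remainder(10.0, 3.0));  //  1.0
        System.out.println(quotient(-10.0, 3.0));  // -3.0: truncated toward zero, not floored to -4.0
        System.out.println(remainder(-10.0, 3.0)); // -1.0: same sign as the dividend
    }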
In both cases, when the quotient of the primitive division has a magnitude greater than a long can hold, it is wrapped in a BigDecimal, truncated by converting to a BigInteger, and then converted back to a double. Why is this done instead of just truncating the double directly (e.g. with Math/floor or Math/ceil)? Ideas I had:
- Some subtlety of double rounding. Maybe if the quotient's significand exceeds 53 bits, somehow you can get a better approximation? This doesn't seem possible, though, since those bits are already gone at q, and even if they were somehow regained by bq, they would be lost again on the conversion back to double (see the check after this list).
- Enforce with-precision settings for rounding. But this isn't done in the small-magnitude branch, so anyone relying on that behavior would not get the expected results anyway.
- Guarantee that a q of +/- Infinity or NaN will throw (new BigDecimal(q) does throw NumberFormatException for those values, as demonstrated below). But why not just check for them directly?
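To test the first and third ideas I ran this quick check (my own sketch, not from the library). Any finite double whose magnitude exceeds Long.MAX_VALUE is at least 2^63, well past the 2^52 point above which doubles can no longer have fractional parts, so the BigDecimal detour hands back the same value; meanwhile new BigDecimal(q) rejects Infinity and NaN with a NumberFormatException:

    import java.math.BigDecimal;

    public class TruncationCheck {
        public static void main(String[] args) {
            double q = 1.5e20; // magnitude > Long.MAX_VALUE (~9.22e18)
            System.out.println(Math.ulp(q));        // 32768.0: no fractional bits left
            System.out.println(Math.floor(q) == q); // true: already an exact integer
            System.out.println(
                new BigDecimal(q).toBigInteger().doubleValue() == q); // true: the detour is an identity

            // BigDecimal(double) rejects non-finite input, so the bigint branch throws here
            try {
                new BigDecimal(Double.POSITIVE_INFINITY);
            } catch (NumberFormatException e) {
                System.out.println("threw: " + e.getMessage()); // "Infinite or NaN"
            }
        }
    }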