Unexpected behavior of promote(Rational, FloatingPoint)


Tomas Lycken

Jun 12, 2013, 9:19:09 PM
to juli...@googlegroups.com
I'm getting an unexpected result from a call to promote:

julia> promote(1//3, 0.33333333333333333333333333)
(1//3,6004799503160661//18014398509481984)
julia> promote(1//3, 3*pi)
(1//3,2652839157010665//281474976710656)
while I had expected something like

julia> promote(1//3, 0.33333333333333333333333333)
(0.33333333333333245,0.333333333333333333234) # or whatever - 1/3 to float precision
My expectation seems to be consistent with the example in the manual: 

julia> promote(1, 2.5, 3, 3//4)
(1.0,2.5,3.0,0.75)

and it is also the actual behavior for some values:

julia> promote(1//3, pi)
(0.3333333333333333,3.141592653589793)

although I suspect pi is a special number type in itself...

Is this intended behavior? It seems more like a bug to me - given a rational and a float I would not expect to be able to express both as rationals, in general (and the floating point precision problems for some rationals just make this silly: try for instance promote(1//3, 0.1) in your REPL...).
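
The `promote(1//3, 0.1)` result Tomas alludes to can be reproduced outside Julia: Python's `fractions.Fraction` performs the same exact conversion of a float to the rational value it represents, so a quick sketch (in Python, standing in for the 2013 Julia session) of what the REPL showed is:

```python
from fractions import Fraction

# 0.1 converted to the exact rational the double represents --
# this is the "silly" result promote(1//3, 0.1) produced at the time
exact = Fraction(0.1)
assert exact == Fraction(3602879701896397, 36028797018963968)
assert exact != Fraction(1, 10)  # the float 0.1 is not the rational 1/10
```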

// Tomas

Gunnar Farnebäck

Jun 13, 2013, 3:13:36 AM
to juli...@googlegroups.com
On Thursday, June 13, 2013 at 03:19:09 UTC+2, Tomas Lycken wrote:
Is this intended behavior? It seems more like a bug to me - given a rational and a float I would not expect to be able to express both as rationals, in general (and the floating point precision problems for some rationals just make this silly: try for instance promote(1//3, 0.1) in your REPL...).

Even worse, it makes it very vulnerable to overflow in the rational arithmetic.

julia> 511//512 * 0.1
Inf
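
The overflow is easy to see from the exact rational value of 0.1. A sketch in Python (whose `as_integer_ratio` does the same exact conversion the promotion rule did): the denominator of the float 0.1 is 2^55, and multiplying by 511//512 pushes it to 2^64, past what a signed 64-bit `Rational{Int64}` can hold:

```python
# why 511//512 * 0.1 overflowed: the exact denominator of the float 0.1
# is 2**55, and the product needs denominator 2**55 * 512 = 2**64
# (the odd numerator allows no reduction), which wraps around in Int64
num, den = (0.1).as_integer_ratio()
assert den == 2 ** 55
assert den * 512 == 2 ** 64
assert den * 512 > 2 ** 63 - 1  # exceeds typemax(Int64)
```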

Alessandro "Jake" Andrioni

Jun 13, 2013, 3:19:31 AM
to juli...@googlegroups.com
On 12 June 2013 22:19, Tomas Lycken <tomas....@gmail.com> wrote:
julia> promote(1//3, 0.33333333333333333333333333)
(1//3,6004799503160661//18014398509481984)

It seems to be a (somewhat) recent change, as in my month-old Julia it works as expected in both cases.

Stefan Karpinski

Jun 13, 2013, 7:04:34 AM
to Julia Dev
This is intentional and is a very recent change (this week). Doing things this way is a logical consequence of making == transitive, which was done a month ago:


Before that you had 1//10 == 0.1 == 3602879701896397//36028797018963968 although 1//10 != 3602879701896397//36028797018963968, which is obviously bad news. Hopefully we can all agree that it makes sense to have 1//10 != 0.1 to fix this problem, which is what the above change did. Numerically, this is more correct: the floating-point number 0.1 does not represent the rational value 1/10 exactly – it represents the rational value 3602879701896397/36028797018963968.

That change alone left us in an unfortunate situation, however, since we then had 0.1 != 1//10 yet 0.1 - 1//10 == 0. Awkward! The root of the problem was that we had promote(0.1, 1//10) => (0.1,0.1). The solution is to promote (Rational,Float) => Rational instead and convert floats to the exact rational value they represent. In this case that means that we get this:

julia> promote(0.1,1//10)
(3602879701896397//36028797018963968,1//10)

Now we can easily see not only that 0.1 is not exactly 1//10, but also by how much it differs:

julia> 0.1 - 1//10
1//180143985094819840

julia> float(ans)
5.551115123125783e-18

In essence, with this change, instead of mapping rationals to floats in an onto but very not-one-to-one way, we instead map the floats into the rationals in a one-to-one (but not onto) manner. Without this, you can have transitive equality that's consistent with your arithmetic.
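
Stefan's session can be checked independently: Python's `Fraction` maps floats into the rationals in the same one-to-one way, and it reproduces both numbers above (a sketch, since the 2013 Julia session itself no longer runs):

```python
from fractions import Fraction

# the exact difference between the double 0.1 and the rational 1/10
diff = Fraction(0.1) - Fraction(1, 10)
assert diff == Fraction(1, 180143985094819840)
assert float(diff) == 5.551115123125783e-18
```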

Some of the other unexpected behaviors here are bugs, which I'll fix.

Stefan Karpinski

Jun 13, 2013, 7:05:57 AM
to Julia Dev
I also need to update the manual.

Tomas Lycken

Jun 13, 2013, 7:49:57 AM
to juli...@googlegroups.com
OK, that explains a lot.

As a consequence of this change, I guess I'll have to re-write my usage of "promote(x::Real, y::FloatingPoint)..." where x can be either integer or rational (two floats are handled by another method) into something that uses convert instead. How do I make sure that I convert to the correct (i.e. matching) floating point type?

// T

Stefan Karpinski

Jun 13, 2013, 7:55:39 AM
to Julia Dev
On Thu, Jun 13, 2013 at 7:04 AM, Stefan Karpinski <ste...@karpinski.org> wrote:
Without this, you can have transitive equality that's consistent with your arithmetic.

correction: you *can't* have

Stefan Karpinski

Jun 13, 2013, 7:57:32 AM
to Julia Dev
On Thu, Jun 13, 2013 at 7:49 AM, Tomas Lycken <tomas....@gmail.com> wrote:
OK, that explains a lot.

Good, I'm glad the explanation makes sense. Sorry about the disruptive change. I think it's necessary.
 
As a consequence of this change, I guess I'll have to re-write my usage of "promote(x::Real, y::FloatingPoint)..." where x can be either integer or rational (two floats are handled by another method) into something that uses convert instead. How do I make sure that I convert to the correct (i.e. matching) floating point type?

Can you elaborate a bit on the situation? I'm not quite clear on what you need. 

Tomas Lycken

Jun 13, 2013, 8:15:02 AM
to juli...@googlegroups.com
To be honest, although I do understand the rationale behind this change, in terms of "expected output" I think a lot of users are going to be surprised by the behavior of == for cases like 0.1 == 1//10. But I do agree that it's better to be honest with the inaccuracy of floats, and since 1//2 == 0.5 still works (if I correctly understood the code in the pull request you linked) I guess it's all for the better. It'll take some time to get used to, though =)

As for my concrete problem, I have a couple of functions with a method that takes two floats:

    function myfun(x::FloatingPoint, y::FloatingPoint)
        # do some stuff with x and y
    end

and I want to make this function available to combinations of arguments where exactly one of the arguments is a float, and the other is another type of real number. My current solution was to add the following methods

    myfun(x::Real, y::FloatingPoint) = myfun(promote(x,y)...)
    myfun(x::FloatingPoint, y::Real) = myfun(promote(x,y)...)

and let promote handle choosing the correct floating point type etc. This worked as expected until this change was introduced, although I don't see anything in your code that changes promote. (If you want to see the actual code, take a look at the very bottom of https://github.com/tlycken/julia/blob/isapprox/base/floatfuncs.jl)

I would still argue that it makes more sense to have promote(0.1, 1//10) == (0.1, 0.1) than to promote to rationals, so maybe one could change the behavior of the == operator without changing what types promote chooses to return?

// T

Steven G. Johnson

Jun 13, 2013, 10:45:14 AM
to juli...@googlegroups.com
I have to say that I disagree strongly with this change.  It makes Rational a lot less useful, and will lead to (unhappily) surprised users.

From a numerical perspective, rational arithmetic is useless most of the time.  For me, the main value of Rational is to represent the occasional rational constant like 2//3 in a way that *USED* to be automatically promoted to the correct floating-point type with full precision.  e.g. if I write

     f(x) = (2//3) * x

then in the old behavior it would automatically evaluate 2/3 in the correct precision for x.  In the new behavior, if I input a floating-point x, it returns a rational result, which is almost certainly *NOT* what I want.   In fact, it now doesn't even work at all for x::BigFloat (promote gives an error "no method significand").

In the spirit of Scheme (http://www.gnu.org/software/guile/manual/html_node/Exactness.html), it makes more sense to me to think of Rational values as representing "exact" constants, whereas floating-point values are "inexact" in that they may (or may not) contain some roundoff errors, and when an exact value is combined with an inexact value, the result should be inexact.
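
Python's `Fraction` happens to implement exactly this contagion rule, which makes the proposed behavior easy to demonstrate (a Python sketch of the rule, not Julia semantics):

```python
from fractions import Fraction

# exact * inexact -> inexact: mixing a Fraction with a float
# yields a float, per the Scheme-style contagion rule
result = Fraction(2, 3) * 1.5
assert isinstance(result, float)
assert abs(result - 1.0) < 1e-15  # correct value, float precision
```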

Numerical computation (in the sense of numerical analysis, i.e. approximating real functions) is the main niche that Julia is targeting, or at least the wedge Julia is using to gain traction as a language, and these kinds of decisions should be justified from that perspective.

Steven

Steven G. Johnson

Jun 13, 2013, 10:55:40 AM
to juli...@googlegroups.com
(Nor do I think numerical users will be bothered by a lack of transitivity, or transitivity only modulo roundoff errors, in the rare case of multiple equality comparisons involving a mixture of floating-point values and other types.  To me, this is way less important than the issue of Rational*Float == Float.)

Steven G. Johnson

Jun 13, 2013, 11:32:56 AM
to juli...@googlegroups.com
This change basically turns Rational into poison for numerical computations.  Any Rational that you introduce will infect everything it touches, turning everything Rational and destroying both performance and accuracy (disastrous overflow is almost certain to occur).

Steven G. Johnson

Jun 13, 2013, 11:38:02 AM
to juli...@googlegroups.com
See also https://github.com/JuliaLang/julia/issues/3378 for an example where the new promotion rules result in 0/0 instead of 0 after propagating through only a single operation.

Steven G. Johnson

Jun 13, 2013, 11:43:31 AM
to juli...@googlegroups.com
Or:

   julia> ((2//3) * e)^2
   -Inf

Another fun behavior involving transcendental functions:

    julia> log(2.0) + log(1//1)
    0.6931471805599453

    julia> log(2.0) + 0//1
    6243314768165359//9007199254740992

Bradley Alpert

Jun 13, 2013, 12:21:52 PM
to juli...@googlegroups.com
I agree with Steven that if the cost of transitivity of == is that rational * float yields rational, then this price is too high.  The latter behavior is not only surprising, but also very unhelpful.  (Perhaps poison is not too strong a word.)  It creates the necessity of fencing rationals away from any typical computation.

Stefan Karpinski

Jun 13, 2013, 1:12:20 PM
to Julia Dev
Ok, maybe this change needs to be reconsidered. In that case I suspect that isequal should be transitive (it has to be for hashing to work), while == will not be transitive. We still need some coherent theory of what == means.

Steven G. Johnson

Jun 13, 2013, 1:41:08 PM
to juli...@googlegroups.com


On Thursday, June 13, 2013 1:12:20 PM UTC-4, Stefan Karpinski wrote:
Ok, maybe this change needs to be reconsidered. In that case I suspect that isequal should be transitive (it has to be for hashing to work), while == will not be transitive. We still need some coherent theory of what == means.

The choice is either:

1) a == b <==> a - b is zero.   In this case 1//10 == 0.1, and equality is not transitive.  (Note that this rule would imply Inf != Inf, since Inf - Inf -> NaN != 0, whereas IEEE 754 has Inf == Inf, I believe.)
2) a == b means that a and b represent the same numeric value.  In this case 1//10 != 0.1 and equality is transitive, but 1//10 - 0.1 == 0.0

I tend to prefer (2).  If you want to check whether a-b is zero, you should write a-b == 0.
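
For what it's worth, Python's numeric tower makes choice (2): == compares exact values (and is transitive across int, float, and Fraction), while mixed subtraction still rounds through floating point:

```python
from fractions import Fraction

assert Fraction(1, 10) != 0.1        # exact, transitive comparison
assert Fraction(1, 10) - 0.1 == 0.0  # arithmetic rounds through float
```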

Steven G. Johnson

Jun 13, 2013, 1:50:22 PM
to juli...@googlegroups.com
Note that we still have the intransitive == for BigFloat/BigInt comparisons:

julia> i1 = BigInt(10)^1000;

julia> i2 = BigInt("10000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000");

julia> i1 == i2
false

julia> i1 == BigFloat(10)^1000
true

julia> i2 == BigFloat(10)^1000
true

Stefan Karpinski

Jun 13, 2013, 1:53:22 PM
to Julia Dev
Ok, in that case it's just the most recent change that needs to be reverted, not the previous change that made == transitive.

Toivo Henningsson

Jun 13, 2013, 1:57:18 PM
to juli...@googlegroups.com
I would also support (2). I would tend to see 1//10 - 0.1 being == 0 or not as roundoff error.

Steven G. Johnson

Jun 13, 2013, 2:08:27 PM
to juli...@googlegroups.com


On Thursday, June 13, 2013 1:41:08 PM UTC-4, Steven G. Johnson wrote:


On Thursday, June 13, 2013 1:12:20 PM UTC-4, Stefan Karpinski wrote:
Ok, maybe this change needs to be reconsidered. In that case I suspect that isequal should be transitive (it has to be for hashing to work), while == will not be transitive. We still need some coherent theory of what == means.

The choice is either:

1) a == b <==> a - b is zero.   In this case 1//10 == 0.1, and equality is not transitive.  (Note that this rule would imply Inf != Inf, since Inf - Inf -> NaN != 0, whereas IEEE 754 has Inf == Inf, I believe.)
2) a == b means that a and b represent the same numeric value.  In this case 1//10 != 0.1 and equality is transitive, but 1//10 - 0.1 == 0.0

I guess a third reasonable choice is:

3) a == b <==> promote(a,b) yields the same numeric value.

This is the old behavior, as I understand it (and is still the behavior for BigInt == BigFloat).  It is intransitive, but (with special exceptions like Inf) means that a == b <==> a - b is zero, and has the big advantage of making == work automatically for arbitrary user-defined numeric types once the promotion rules are defined.

I'm not sure I understand the issue with hashing.  It's not at all clear to me why hash(1) should equal hash(1.0) etcetera -- in general, why should different types hash to the same value, even if they happen to "represent" the same number?    After all, hash(1) != hash("1") even though "1" is a string representation of the number 1.

The function hash(x::FloatingPoint) in dict.jl seems to me rather ugly, as well as being broken for BigFloat: it gives the incongruous hash(big(pi)) == hash(float64(pi)) && big(pi) != float64(pi).
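
On the hashing point, Python resolves it the opposite way: it enforces the invariant that a == b implies hash(a) == hash(b) across all numeric types, precisely so that mixed-type dict keys behave sanely. A quick illustration:

```python
from fractions import Fraction

# equal numeric values must hash equal across types in Python
assert hash(1) == hash(1.0) == hash(Fraction(1, 1))

# 0.1 hashes as the exact rational the double represents -- so it is
# guaranteed to hash the same as that rational, whatever the type
assert hash(0.1) == hash(Fraction(3602879701896397, 36028797018963968))
```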

Steven G. Johnson

Jun 13, 2013, 2:11:06 PM
to juli...@googlegroups.com


On Thursday, June 13, 2013 1:53:22 PM UTC-4, Stefan Karpinski wrote:
Ok, in that case it's just the most recent change that needs to be reverted, not the previous change that made == transitive.

Except that your previous change didn't make == transitive.  It only made it transitive for more types than before.   It is still intransitive for BigFloat/BigInt, for example.

Especially given the existence of user-defined numeric types, it may be that the old behavior (3) is the only practical option.

Tomas Lycken

Jun 13, 2013, 4:50:43 PM
to juli...@googlegroups.com
I think 3) is by far the most sound behavior of all choices so far, for a couple of reasons (some of which have been mentioned already, but which I include here for completeness):
  • It is extendable to user-defined types without any extra work - if you write your own number type you still need to define promotion rules for other reasons, and having them in place makes == "just work".

  • It is what the user expects - in almost all other situations when you have two numbers of different types, you expect whatever operation you're doing to promote and work with whatever you get.

  • It doesn't ruin the Rational type for me: as Rationals worked before, I happily used them to represent constants in my formulas, relying on the values to be as exact as possible until a float is introduced, at which point they are gracefully (and without my own intervention!) converted to floats (of the correct precision!) and everything just keeps going. 
I might not be representative in this, but I personally expect any type paired with a float to promote to a corresponding float, and work with floats - as soon as there are floats there, there is a risk of roundoff errors, and I adjust my programming accordingly (including using abs(a-b) <= tol, or isapprox(a,b), rather than ==).

With this change, I can't really use rationals anywhere, as I'll have to convert them to floats before they come in contact with anything anyway - suddenly, one of Julia's (potentially) really powerful, and quite unique, traits is reduced to a more exact (?) way of defining fractional values, with an enormous cost in readability; I can still do convert(FloatingPoint, 2//3) instead of 0.666666666666666 - and I don't have to think about how many 6's I add, or if the last one should really be a 7... The Rational type would be really useful if I could write expressions like 4//3 * pi * f(x), where f happens to return a float - but now I can't, at least not if I want the output I expect.

// T

Stefan Karpinski

Jun 14, 2013, 5:37:39 PM
to Julia Dev
https://github.com/JuliaLang/julia/commit/d6bc3935dac15e5f5499e083f32aa8fdead8249e

Doesn't resolve all issues, but goes back to the previous behavior at least. Sorry for the confusion but it's good that we get this sorted out in a well-thought-out manner.

Stefan Karpinski

Jun 14, 2013, 5:45:38 PM
to Julia Dev
I don't consider option #3 to be viable. We've gone to great effort to make sure that Int-Float64 comparisons are correct instead of just doing the lazy promote-to-float and compare thing. The current BigInt, BigFloat behavior is just a bug as far as I'm concerned. The question of whether to promote rationals/float operations to rational is a separate matter and I'm convinced that this should promote to float (and that's now what we do, once again).

Stefan Karpinski

Jun 14, 2013, 5:48:33 PM
to Julia Dev
Issue filed for BigInt/BigFloat comparison: https://github.com/JuliaLang/julia/issues/3399.

Stefan Karpinski

Jun 14, 2013, 5:56:03 PM
to Julia Dev
Interesting question: on the premise that exact + inexact => inexact, since Float32 is *more* inexact than Float64, should promote(::Float32, ::Float64) => Float32? Are we just hallucinating bogus digits when we promote that to Float64? I suspect that Gunnar would be quite pleased with such a change. However, that would suggest to me that we do the same with Float64, BigFloat and promote these to Float64.

Jeff Bezanson

Jun 14, 2013, 6:00:41 PM
to juli...@googlegroups.com
I think exact/inexact is a binary distinction.

Diego Javier Zea

Jun 14, 2013, 7:48:25 PM
to juli...@googlegroups.com
I don't quite get what the meaning of === is (it isn't in the standard library documentation).

What about a more lazy comparison, like the mathematical ≈ ("is approximately equal to"), using ~= ?

Stefan Karpinski

Jun 15, 2013, 12:06:41 PM
to Julia Dev
`a === b` is another way of writing `is(a,b)`. It implements Henry Baker's EGAL predicate. Having an "approximately equal" operator has been discussed before, but there's already enough going on in this thread.

Stefan Karpinski

Jun 15, 2013, 12:09:05 PM
to Julia Dev
On Fri, Jun 14, 2013 at 6:00 PM, Jeff Bezanson <jeff.b...@gmail.com> wrote:
I think exact/inexact is a binary distinction.

Yes, of course, but some things can be more inexact than others. My point is that the same logic that implies that (inexact,exact) => inexact also seems to argue that (more inexact, less inexact) => more inexact.

Tomas Lycken

Jun 15, 2013, 1:27:56 PM
to juli...@googlegroups.com
@Stefan: Whether promote(x::Float32, y::Float64) should return single or double precision floats is an interesting question, and I'm not convinced in either direction myself. However, I think the most important consideration here is rather what the user expects. What do other languages do? If e.g. C lets 0.3f * 0.3 be double, I think Julia should too. (I've been trying to look in the IEEE standards to see if there's any recommendation there. I think I found something at the end of section 10.1 in IEEE 754-2008, but I'm not versed enough in these standard documents to understand it fully...)

@Diego: there is an isapprox(x,y) method which does exactly that kind of comparison - I'll hopefully be able to file a pull request for merging it into base later tonight if all tests pass after I rebase it onto master and get promotion working as I expected it =) There was some discussion on introducing an operator syntax for this, but I don't think it's ready for it yet (and I don't think the discussion belongs in this thread).

// Tomas

Jeff Bezanson

Jun 15, 2013, 4:23:57 PM
to juli...@googlegroups.com
By "binary" I meant either/or, i.e. something is either exact or not;
you cannot be more or less inexact.

Consider for example 1.0f0 * float64(pi). The fact that the 1 is a
Float32 doesn't mean you've lost any precision.
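
Jeff's point can be made concrete with a 32-bit round trip: 1.0 is exactly representable in Float32, so widening it loses nothing, whereas 0.1 is not. A sketch using Python's struct module to emulate Float32 storage:

```python
import struct

def to_f32(x):
    # round a double to the nearest IEEE 754 single, then widen it back
    return struct.unpack('f', struct.pack('f', x))[0]

assert to_f32(1.0) == 1.0   # 1.0f0 carries full precision: nothing lost
assert to_f32(0.1) != 0.1   # 0.1f0 differs from the Float64 0.1
```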

Stefan Karpinski

Jun 15, 2013, 4:55:47 PM
to Julia Dev
Ok, well, I think this subject has been beaten to death for now. Some things were improved (e.g. rational arithmetic is now far less susceptible to overflow). Some things were clarified. Many people were probably annoyed. No animals were harmed. Moving on.