If someone writes a Decimal type for Julia, they could also include a @decimal macro that would rewrite float literals inside, so that you would write

@decimal begin
    a = 0.1 + 1.1
end
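Here's a minimal sketch of what such a macro might look like. The Decimal type below is just a toy stand-in (a rational wrapper, purely for illustration); a real implementation would come from a package. The macro walks the expression tree and wraps every Float64 literal in a Decimal constructor:

struct Decimal
    x::Rational{BigInt}   # toy representation, for this sketch only
end
# By the time a macro runs, 0.1 is already a Float64, so we use
# rationalize to recover the intended decimal value (0.1 -> 1//10).
Decimal(x::Float64) = Decimal(rationalize(BigInt, x))
Base.:+(a::Decimal, b::Decimal) = Decimal(a.x + b.x)

decimalize(x) = x                                  # leave everything else alone
decimalize(x::Float64) = :(Decimal($x))            # wrap float literals
decimalize(ex::Expr) = Expr(ex.head, map(decimalize, ex.args)...)

macro decimal(ex)
    esc(decimalize(ex))
end

@decimal begin
    a = 0.1 + 1.1    # expands to a = Decimal(0.1) + Decimal(1.1)
end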
Writing this as dec"1.1" seems good to me and pretty self-explanatory to the reader.
Hi Palli,

I'm not exactly sure I completely understand your questions, but I'll do my best to answer them:
1) Yes, I'm sure a decimal type would be useful, and probably a great way to learn about the details and intricacies of floating point. It could all be done in a package, except for the "1.1d + 1.2d" format, which would require modifications to the parser. In the interim, you could also use macro string literals:
d"1.1" + d"1.2"
Another great resource on binary/decimal conversion is Rick Regan's website:
I don't think we would ever want to have float literals be decimal by default: aside from the fact that I don't think anyone has tried to compile Julia on a Power6 or Sparc64 machine (the kinds of machines with hardware decimal floating point), there are numerical stability reasons for preferring a binary format.
2) In short, no. The original developers of Julia made a very deliberate choice to make integers as close to the concept of "machine integers" as possible, and those behave very differently from floats at the bit level. So Int and Float64 are most likely going to remain very different types.
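A concrete illustration of that bit-level difference, in current Julia syntax: Int arithmetic wraps around on overflow, like the underlying machine instruction, while Float64 overflows to Inf:

julia> typemax(Int) + 1 == typemin(Int)   # machine integers wrap silently
true

julia> typemax(Float64) * 2               # floats go to Inf instead
Inf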
More compiler flags will make testing much harder.
I would imagine lots of packages breaking if suddenly all float literals became decimal.
On Friday, December 5, 2014 6:31:12 PM UTC, Ivar Nesje wrote:
> More compiler flags will make testing much harder.
If decimal gives correct results and binary is slightly wrong or badly wrong, then couldn't you expose bugs in code simply by rerunning with decimal instead of binary?
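The classic discrepancy such a rerun would surface:

julia> 0.1 + 0.2 == 0.3   # neither side is exactly representable in binary
false

With exact decimal arithmetic the comparison would come out true, so a result that flips between the two runs points straight at a rounding-sensitive spot in the code.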
Decimal floating-point is attractive in cases where human inputs need to be preserved as exactly as possible
while maintaining a large dynamic range and decent performance. However, my understanding is that the "inputs" in such applications generally come from *outside* the code (e.g. from external files, databases, UIs, etc.), in which case the Julia literal format is irrelevant.
So, even in these applications, it's not at all clear to me how much you save by being able to write 1.1 or 1.1d instead of d"1.1" literals in the source code...how many decimal literals would you actually need?
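To make that concrete, here's roughly what such code looks like (hypothetical file name, reusing the parse method sketched above): the decimal values arrive as text, and the source contains no decimal literals at all.

# Values come from outside the program, so they are parsed from strings.
prices = [parse(Decimal, line) for line in eachline("prices.txt")]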
I'm open to the idea of adding 1.1d0 decimal literal syntax.
> I'm open to the idea of adding 1.1d0 decimal literal syntax.
I should have been clearer... I didn't mean that the default type for calculations should be arbitrary precision decimal, but rather that numeric *literals* should be kept as such until they actually need to be converted for use...
Integers also - that int-literal thing should go... I think Python 3 got this right: it got rid of the long type, and all ints (and their literals) are now arbitrary precision...
I was really talking about integer *literals* being arbitrary precision... having an int-literal setting seems like a bad idea...
julia> typeof(1)
Int64

julia> typeof(18446744073709551616)
Int128

julia> typeof(340282366920938463463374607431768211456)
Base.GMP.BigInt
I could live with a *short* way of denoting a decimal literal, like d"1.45e50", even though it is a pain.

The big problem is simply not having Decimal types built into the language (even if implemented with an outside library like decNumber, just as BigFloat also uses a library).
Is this the right place for discussion of it, or is there already a place?
I would like to know:
1) What format are you supporting: the packed format (DPD), or the one with a binary integer plus scale (BID)? (The binary form is supposed to be much faster for software implementations.)
2) You said that it was 100x slower than IEEE binary floating point; do you have any numbers (in a gist, maybe? that's where I've been told to put such things)?
4) What sorts of exceptions are thrown? (Or can you get the status flags after an operation somehow?)
5) How are things like rounding modes set?