Switchable literal types - ways to make Julia slower :) [or faster/safer..?]


Páll Haraldsson

unread,
Dec 5, 2014, 12:12:54 PM12/5/14
to juli...@googlegroups.com

Hi,

My first post on this list. Please be kind. :)

I've been thinking about 1) implementing decimal floating point in Julia, and 2) how Julia can work on the web (client side).

Aside from my own special [hypothetical] data type (and DEC64) there is a standard one:

http://speleotrove.com/decimal/
"It brings together the relevant concepts from a number of ANSI, IEEE, ECMA, and ISO standards, and conforms to the decimal formats and arithmetic in the IEEE 754 standard (‘754-2008’) published by the IEEE in August 2008, and the new addition ISO/IEC/IEEE 60559:2011 standard, published by ISO in July 2011."

[Actually for each size there are two versions..]

[I wasn't aware that Patriot missiles missed Scud-missiles because of decimal issues.]

[Python already has Decimal.]


I can understand why decimal floating point isn't the default (it would kill performance). However, it would be annoying to always have to type, e.g.:

>Decimal("1.1") + Decimal("0.1")
1.2

or even:
>1.1d + 0.1d

or whatever can be done (I do not yet know how to do stuff like this).

similar to:
--int-literals={32|64}   Select integer literal size independent of platform

Could this switch be made to work:
--float-literals={binary|decimal|rational}   Select floating point literal type independent of platform

and/or have it switchable at runtime (as int-literals is)?

The available decimal floating point sizes are the same (plus Decimal128).
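
For reference, the behaviour that motivates all of this, straight from the Float64 REPL:

julia> 1.1 + 0.1
1.2000000000000002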


Now for the web, meaning JavaScript (not Java, nor plugins like Adobe's Flash or Silverlight..): it doesn't have an int type, only "number", which means double precision. When I first saw that it felt alien to me.. but maybe not to ordinary people. It can represent all of Int32 exactly (really up to 2^53 IIRC) but not Int64. Int16 and lower also seem to be a problem. Emscripten's C-to-JavaScript compiler certainly was a surprise and has some heroic optimizations. Can Julia get away with Float32, and map all integer types to Float64, with an addition to int-literals?
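
The 2^53 limit is easy to check from Julia itself, since Float64 is the same kind of double:

julia> Float64(2^53) == Float64(2^53 + 1)   # both round to 9007199254740992
true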

Best regards,
Palli.

Simon Byrne

unread,
Dec 5, 2014, 1:06:05 PM12/5/14
to juli...@googlegroups.com
Hi Palli,

I'm exactly sure I completely understand your question, but I'll do my best to answer them:

1) Yes, I'm sure a decimal type would be useful, and probably a great way to learn about the details and intricacies of floating point. It could all be done in a package, except for the "1.1d + 1.2d" format, which would require modifications to the parser. In the interim, you could also use macro string literals:

d"1.1" + d"1.2"

Another great resource on binary/decimal conversion is Rick Regan's website:

I don't think we would ever want to have float literals as decimal by default: aside from the fact that I don't think anyone has tried to compile julia on a Power6 or Sparc64 machine (the sort of hardware usually mentioned for decimal support), there are numerical stability reasons for preferring a binary format.

2) In short, no. The original developers of Julia made a very deliberate choice to make integers as close to the concept of "machine integers" as possible, which behave very differently from floats at the bit level. So Int and Float64 are most likely going to remain very different types.

Best,
Simon

Ivar Nesje

unread,
Dec 5, 2014, 1:31:12 PM12/5/14
to juli...@googlegroups.com
More compiler flags will make testing much harder. I would imagine lots of packages breaking if suddenly all float literals became decimal. We currently have lots of problems with the 32/64 bit difference, and I don't think it would be good to add more permutations that portable code must be tested against.

If someone writes a Decimal type for Julia, they could also include a @decimal macro that would rewrite float literals inside, so that you would write

@decimal begin
    a = 0.1 + 1.1
end
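
A minimal sketch of such a macro, assuming a hypothetical Decimal type (with the caveat discussed below: the literal has already been rounded to Float64 by the time the macro runs):

rewrite(x) = x
rewrite(x::Float64) = :(Decimal($x))             # wrap every float literal
rewrite(ex::Expr) = Expr(ex.head, map(rewrite, ex.args)...)

macro decimal(ex)
    esc(rewrite(ex))
end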

Regards Ivar

Stefan Karpinski

unread,
Dec 5, 2014, 1:35:14 PM12/5/14
to Julia Dev
On Fri, Dec 5, 2014 at 1:31 PM, Ivar Nesje <iva...@gmail.com> wrote:

If someone writes a Decimal type for Julia, they could also include a @decimal macro that would rewrite float literals inside, so that you would write

@decimal begin
    a = 0.1 + 1.1
end

This would currently not work well since `0.1` is parsed as a Float64 and represented that way in the AST. By the time the macro gets the value, it's too late.

Stefan Karpinski

unread,
Dec 5, 2014, 1:36:17 PM12/5/14
to Julia Dev
Writing this as dec"1.1" seems good to me and pretty self-explanatory to the reader.

Ivar Nesje

unread,
Dec 5, 2014, 1:59:38 PM12/5/14
to juli...@googlegroups.com
The macro will work, but you'll be limited to about 16 decimal digits of input precision, because the rest of the information is lost in parsing. I would think that the most common reason to work in decimal is to ensure that arithmetic and rounding make money amounts sum to exactly 0.00, not to have more than 16 decimal digits in the input. This would be a much bigger issue for @BigFloat, where you'd expect to be able to have literals with more than 16 digits.
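
The loss is easy to see by asking what value the parser actually stored (display format varies by Julia version):

julia> big(0.1)   # the exact value of the Float64 literal 0.1, not 1/10
1.000000000000000055511151231257827021181583404541015625e-01 with 256 bits of precision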

Ivar

Páll Haraldsson

unread,
Dec 5, 2014, 2:23:33 PM12/5/14
to juli...@googlegroups.com



On Friday, December 5, 2014 6:36:17 PM UTC, Stefan Karpinski wrote:
Writing this as dec"1.1" seems good to me and pretty self-explanatory to the reader.

Yes and no. You could do that, and it is a start that doesn't change core Julia. Some users will care about the speed of Julia, know binary floating point is not exact, and be careful. The rest will not know, and might even use Julia as a "calculator". I'm not saying decimal should be the default, but getting decimal behaviour from the same source code seems to be a good thing as an option. I'm not sure how often you really want to mix and match, but you could hardcode either format if you know what you are doing. That seems to be the Julia way.

Best regards,
Palli.

Kevin Squire

unread,
Dec 5, 2014, 2:25:15 PM12/5/14
to juli...@googlegroups.com
Actually, 0.1d could almost work for literals (modulo converting the floating point number to decimal), by letting "d" be a singleton instance of, e.g., a DecimalLiteral type, and defining multiplication between Float64 and DecimalLiteral.  It would have some limitations, though.  See FixedPointNumbers.jl for an example.
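
A rough sketch of that idea, assuming a hypothetical Decimal type (the conversion still starts from an already-rounded Float64, hence the "modulo" above):

immutable DecimalLiteral end
const d = DecimalLiteral()

import Base: *
*(x::Float64, ::DecimalLiteral) = Decimal(x)   # juxtaposition makes 0.1d mean 0.1 * d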

Cheers,
  Kevin 

Páll Haraldsson

unread,
Dec 5, 2014, 5:42:53 PM12/5/14
to juli...@googlegroups.com

On Friday, December 5, 2014 6:06:05 PM UTC, Simon Byrne wrote:
Hi Palli,

I'm exactly sure I completely understand your question, but I'll do my best to answer them:

"not" intended there?

1) Yes, I'm sure a decimal type would be useful, and probably a great way to learn about the details and intricacies of floating point. It could all be done in a package, except for the "1.1d + 1.2d" format, which would require modifications to the parser. In the interim, you could also use macro string literals:


I'm sure I saw a package that allowed 10V, meaning 10 Volts, as a zero-overhead int type.. That's why I thought this was possible. Still, yes, the parser probably sees the number first and insists on binary floating-point parsing. Could the order matter (d10 vs. 10d)?

d"1.1" + d"1.2"

This will work to begin with, but it would make Julia second class to COBOL; not a good solution if decimal fractional numbers are in some way preferable to binary.
 

Another great resource on binary/decimal conversion is Rick Regan's website:

Funny that you mention this site. At first it seemed the domain had expired and I had gotten something unrelated:

On Monday, Psy’s Gangnam Style video exceeded the limit of YouTube’s view counter; this is what Google had to say (hat tip: Digg):

    “We never thought a video would be watched in numbers greater than a 32-bit integer (=2,147,483,647 views)…”


I was going to add another reason for this proposal and forgot: a "Python mode" for int (there are pros and cons to their way and Julia's):

--int-literals={32|64|BigInt}

 

I don't think we would ever want to have float literals as decimal by default: other than the fact that I don't think anyone has tried to compile julia on a Power6 or Sparc64 machine, there are numerical stability reasons for preferring a binary format.

FYI: Intel objected to the format, which is why there are versions meant for software only. They seemed complicated to decode at first but are probably not so bad in the common case. The original standard meant for hardware can of course also be emulated. Both encodings define the same numbers:

https://en.wikipedia.org/wiki/Decimal128_floating-point_format

"34 significant digits" or "16 decimal digits" for the 64-bit type. Most hardware and software users use the 128-bit version..
 

2) In short, no. The original developers of Julia made a very deliberate choice to make integers as close to the concept of "machine integers" as possible, which behave very different from floats at the bit level. So Int and Float64 are most likely going to remain very different types.

But what "machine"? JavaScript is really a virtual machine without integers.. Not sure if Brendan Eich was brilliant or crazy when he made this shit up.

Surprisingly:

julia> typealias Int Float64
Float64

"works" already? But it does nothing for literals. Floating point numbers, however, can already index into arrays in Julia.

Int already was not a real type, only a typealias for the real int types..

Best regards,
Palli.

Páll Haraldsson

unread,
Dec 5, 2014, 6:06:58 PM12/5/14
to juli...@googlegroups.com

On Friday, December 5, 2014 6:31:12 PM UTC, Ivar Nesje wrote:
More compiler flags will make testing much harder.

Or easier, depending on your point of view. If decimal gives correct results and binary gives slightly wrong or badly wrong ones, then simply being able to rerun with decimal instead of binary could expose bugs in code? Another option would be to substitute interval arithmetic. But what do I know.

I would imagine lots of packages breaking if suddenly all float literal becomes decimal.

Of course not just the literals; the types would also change. Why would the packages break if binary were changed to decimal consistently everywhere? Really, I can only see FFI (maybe) possibly stop working, and serialization to and from textual representations (except JSON, which defines numbers as IEEE floating point; still, the decimal versions of IEEE seem close enough).

Best regards,
Palli.

Ivar Nesje

unread,
Dec 5, 2014, 6:25:00 PM12/5/14
to juli...@googlegroups.com
I talked about testing to ensure that your code works on other people's configurations. 10 binary options means 1024 configurations to test, because they interact in unknown ways. Add 5 more options and you're out of luck.

Lots of packages make assumptions about the properties of Float64, and would fail if floats suddenly were decimal.

Jason Merrill

unread,
Dec 5, 2014, 7:51:36 PM12/5/14
to juli...@googlegroups.com
On Friday, December 5, 2014 3:06:58 PM UTC-8, Páll Haraldsson wrote:

On Friday, December 5, 2014 6:31:12 PM UTC, Ivar Nesje wrote:
More compiler flags will make testing much harder.
If decimal gives correct results and binary slightly wrong or badly wrong.. then if you can simply rerun with decimal instead of binary you can expose bugs in code?

There are a few problems where decimals really do help--e.g. monetary calculations where the input data are specified in decimals and the operations are limited to (+, -, *), or places in a UI where a user can specify a range by bounds and a step, and they expect that stepping from 0 to 1 by 0.1 will be exact--but in many many other cases, it's wrong to suppose that decimal calculations will give "correct results," and binary will give "slightly wrong" results.

If you need exact results over (+,-,*), many times you can use integers, and then the binary/decimal distinction doesn't matter. If you need exact results with division, neither binary nor decimal floats will give you that, but rationals can. If you're doing numerical (i.e. not symbolic) calculations with transcendental functions (sin, cos, exp, log, etc.), then you're going to have some kind of round off error, and binary floats actually have stability advantages here. Need more than 16 digits of precision? Binary BigFloat can get you that as well as a Decimal BigFloat could.
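
All of those alternatives already exist in Base; for example, rationals are exact over +, -, *, and /:

julia> 1//10 + 11//10   # Rational{Int}
6//5

julia> float(ans)
1.2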

In general, it isn't reasonable to take code that was written with binary floats in mind, and expect that running it by interpreting its literals as decimal floats instead will be an improvement. Sometimes it will make the results less accurate--not more.

I'd encourage you to look at some of the problems where you think binary floats are deficient and ask whether they could be solved well by one of Int, BigInt, Rational, or BigFloat. If the answer to all of these is still "no", I'd encourage you to follow up with the idea of a Decimal type with a d"1.1" literal.

You probably won't be able to make a very convincing case for messing with the parser unless you can show off some really compelling applications where it's obvious that a Decimal type is the best way, and it's also obvious that all the d"..." literals are too distracting.

Steven G. Johnson

unread,
Dec 5, 2014, 10:41:51 PM12/5/14
to juli...@googlegroups.com
Decimal floating-point is attractive in cases where human inputs need to be preserved as exactly as possible while maintaining a large dynamic range and decent performance.  However, my understanding is that the "inputs" in such applications generally come from *outside* the code (e.g. from external files, databases, UIs, etc.), in which case the Julia literal format is irrelevant.

So, even in these applications, it's not at all clear to me how much you save by being able to write 1.1 or 1.1d instead of d"1.1" literals in the source code...how many decimal literals would you actually need?  A few dozen in a large program?  To support switchable literals, Julia would therefore be paying a huge price in complexity for arguably very little benefit.

Páll Haraldsson

unread,
Dec 6, 2014, 10:48:18 AM12/6/14
to juli...@googlegroups.com

Note: in engineering (a decimal culture), "0.1" is not the same as "0.10", so a parse-to-binary-float-first approach would not work, while the decimal float formats do handle the two differently. That is one reason for decimal floating point rather than just decimal fixed point. See below:


On Saturday, December 6, 2014 3:41:51 AM UTC, Steven G. Johnson wrote:
Decimal floating-point is attractive in cases where human inputs need to be preserved as exactly as possible

Right,

Actually I wasn't sure we needed decimal floating point and thought decimal fixed point was (often) ok. I've been thinking about this more deeply than I ever have before bringing this topic up. Then I started to have similar doubts as you. You wrote before I could:

while maintaining a large dynamic range and decent performance.  However, my understanding is that the "inputs" in such applications generally come from *outside* the code (e.g. from external files, databases, UIs, etc.), in which case the Julia literal format is irrelevant.

Yes, probably very few decimal literals need to be in a program, so this switchable idea of mine may not be worth it. For interactive use, however, the REPL is the input. Even there it may not be a real problem, just the annoyance of having to write more for literals.
 

So, even in these applications, it's not at all clear to me how much you save by being able to write 1.1 or 1.1d instead of d"1.1" literals in the source code...how many decimal literals would you actually need?

Potentially just one small decimal literal and you get the wrong answer (considering your decimal expectations)?

Best regards,
Palli.

Jeff Bezanson

unread,
Dec 9, 2014, 1:57:21 PM12/9/14
to juli...@googlegroups.com
--int-literals is a bad option that should probably be removed. The
only reason it makes any sense is that it emulates the
re-interpretation of literals as 32- or 64-bit that we already do
anyway when moving code between platforms. The idea was that you could
develop on 32-bit, pass --int-literals=32, and have less stuff change
when you move the code to a 64-bit machine. Changing the types of
literals out from under a program is not generally a good idea.

Another consideration is that Int32 and Int64 give the same (==)
answers for arguments within a reasonable range. In contrast, Decimal
and Float will give different answers for most fractions.

I'm open to the idea of adding 1.1d0 decimal literal syntax.

Jason Merrill

unread,
Dec 9, 2014, 2:31:08 PM12/9/14
to juli...@googlegroups.com


On Tuesday, December 9, 2014 10:57:21 AM UTC-8, Jeff Bezanson wrote:

I'm open to the idea of adding 1.1d0 decimal literal syntax.


That syntax currently means implicit multiplication.
   
    julia> d0 = 2.0
    2.0

    julia> 1.1d0
    2.2

Simon Byrne

unread,
Dec 9, 2014, 2:35:32 PM12/9/14
to juli...@googlegroups.com


On Tuesday, 9 December 2014 18:57:21 UTC, Jeff Bezanson wrote:
I'm open to the idea of adding 1.1d0 decimal literal syntax.

It would be great if there was a consistent way to do this for other types as well, e.g. BigFloats, BigInts, various types in FixedPointNumbers.jl, etc. 

Stefan Karpinski

unread,
Dec 9, 2014, 2:42:29 PM12/9/14
to juli...@googlegroups.com
I'm not too psyched about the gradual erosion of numeric literal juxtaposition.

Jeff Bezanson

unread,
Dec 9, 2014, 7:21:01 PM12/9/14
to juli...@googlegroups.com
Totally valid point. I'm also fine with d"1.1", which can be
implemented today without controversy, and generalizes better.

Erik Schnetter

unread,
Dec 13, 2014, 5:56:48 PM12/13/14
to juli...@googlegroups.com
On Tue, Dec 9, 2014 at 1:57 PM, Jeff Bezanson <jeff.b...@gmail.com> wrote:
> --int-literals is a bad option that should probably be removed. The
> only reason it makes any sense is that it emulates the
> re-interpretation of literals as 32- or 64-bit that we already do
> anyway when moving code between platforms. The idea was that you could
> develop on 32-bit, pass --int-literals=32, and have less stuff change
> when you move the code to a 64-bit machine. Changing the types of
> literals out from under a program is not generally a good idea.
>
> Another consideration is that Int32 and Int64 give the same (==)
> answers for arguments within a reasonable range. In contrast, Decimal
> and Float will give different answers for most fractions.
>
> I'm open to the idea of adding 1.1d0 decimal literal syntax.

This means "1.1 with double precision" in Fortran. Giving it a
different meaning in Julia would be confusing.

-erik

--
Erik Schnetter <schn...@cct.lsu.edu>
http://www.perimeterinstitute.ca/personal/eschnetter/

Scott Jones

unread,
Apr 1, 2015, 11:48:04 AM4/1/15
to juli...@googlegroups.com
I am also very interested in having both fixed size (probably using the IEEE 754-2008 32/64/128 bit decimal formats), as well as arbitrary precision decimal arithmetic (like Java's BigDecimal or Python's Decimal packages).

I've worked for almost 30 years consulting for a software company whose language's numbers were a scaled decimal format (64-bit signed integer, with a 1-byte signed scale)... I implemented portable C code for that format for 16-bit, 32-bit (back in 1986) and later 64-bit platforms (just for the basic functions: +, -, *, division, integer division, modulo)...

There are a large number of applications where you really don't want any conversions/rounding going on (think financial applications)... there's a good reason why SQL has NUMERIC/DECIMAL types...

To me, this is the biggest flaw in Julia: that it doesn't have built-in support for decimal arithmetic, and that floating point literals are not decimal by default... It seems to me that since you can always convert later from decimal to a binary floating format, why would you ever want to incur the conversion immediately (and essentially lose information in the process)?

Scott

Stefan Karpinski

unread,
Apr 1, 2015, 11:57:02 AM4/1/15
to juli...@googlegroups.com
In short: because you want the default to be reasonably fast, which decimal arithmetic is not.

Scott Jones

unread,
Apr 1, 2015, 12:35:39 PM4/1/15
to juli...@googlegroups.com
I should have been clearer... I didn't mean that the default type for calculations should be arbitrary precision decimal, but rather that numeric *literals* should be kept as such until they actually need to be converted for use...
From what I've read here, it seems Julia has some other issues already related to converting numeric literals to fixed size numeric representations early on in the parsing process.

What I would *like* to see would be something like the decNumber package used to add a set of Decimal types (for example Decimal32, Decimal64, Decimal128, BigDecimal), with rules such that if you have an operation between a binary float and a literal, the literal is converted to the appropriate binary float type, but if you have a decimal float type, it is converted to the appropriate decimal float type.
If you have two literals then, since numeric literals really *are* arbitrary-precision *decimal*, I think the calculation should be done with decimal arithmetic as well...
i.e. 0.1*10 would get you the integer 1...

Integers also - that int-literal thing should go... I think Python 3.4 got this right, at least it got rid of the long type, and all ints (and their literals) are now arbitrary precision...

Scott

Stefan Karpinski

unread,
Apr 1, 2015, 1:07:02 PM4/1/15
to juli...@googlegroups.com
Let's suppose 0.1 and 0.2 are Decimal64. Then what does 0.1 + 0.2 produce? Presumably another Decimal64. That would continue through a *lot* of computations. But in most real-world numerical code, you want everything done with the native Float64 type. Adding in a literal type that's separate from the decimal type just makes the situation even more problematic since you can have functions that return the literal type – making it no longer a literal type really.

Having all integers be arbitrary precision is an acceptable choice if you don't care that much about performance. In particular, if your standard library is written in a different language that has machine-precision integers, then it's totally reasonable. If you want a language with sufficiently good performance to implement its own standard library and still have it be fast, then you really don't want to do this.

Simon Byrne

unread,
Apr 1, 2015, 1:09:41 PM4/1/15
to juli...@googlegroups.com
On Wednesday, 1 April 2015 17:35:39 UTC+1, Scott Jones wrote:
I should have been clearer... I didn't mean that the default type for calculations should be arbitrary precision decimal, but rather that numeric *literals* should be kept as such until they actually need to be converted for use...


The problem with a float literal type is that

x = 0.1; foo(x)
foo(0.1)

can mean different things, which is even worse than the whole FLT_EVAL_METHOD thing in C (which is at least somewhat understandable, once you know about it).

Integers also - that int-literal thing should go... I think Python 3.4 got this right, at least it got rid of the long type, and all ints (and their literals) are now arbitrary precision...

I think this is unlikely to happen: see this blog post (and comments below):

Tim Holy

unread,
Apr 1, 2015, 1:22:58 PM4/1/15
to juli...@googlegroups.com
Scott, if this is something you need, consider checking
https://github.com/tinybike/Decimals.jl
and possibly pitching in on the development.

--Tim
Scott Jones

unread,
Apr 1, 2015, 1:28:17 PM4/1/15
to juli...@googlegroups.com
I could live with a *short* way of denoting a decimal literal, like d"1.45e50", even though it is a pain.
The big problem is simply not having Decimal types built into the language (even if via an outside package like decNumber, just as BigFloat also uses a library).

I was really talking about integer *literals* being arbitrary precision... having an int-literal setting seems like a bad idea...
You are correct, that for Julia, having all integers be BigInts would not be good.

Scott 

Stefan Karpinski

unread,
Apr 1, 2015, 1:35:13 PM4/1/15
to juli...@googlegroups.com
On Wed, Apr 1, 2015 at 1:28 PM, Scott Jones <scott.pa...@gmail.com> wrote:
I was really talking about integer *literals* being arbitrary precision... having an int-literal setting seems like a bad idea...

You don't need literals to be arbitrary precision – they can just have the necessary precision:

julia> typeof(1)
Int64

julia> typeof(18446744073709551616)
Int128

julia> typeof(340282366920938463463374607431768211456)
Base.GMP.BigInt

Stefan Karpinski

unread,
Apr 1, 2015, 1:38:48 PM4/1/15
to juli...@googlegroups.com
On Wed, Apr 1, 2015 at 1:28 PM, Scott Jones <scott.pa...@gmail.com> wrote:
I could live with a *short* way of denoting a decimal literal, like d"1.45e50", even though it is a pain.
The big problem is simply not having Decimal types built into the language (even if via an outside package like decNumber, just as BigFloat also uses a library).

You can implement this in a library so why does it need to be built into the language? Technically, Int64 isn't even built into the language – it's defined in the standard library.
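
For instance, Int64's declaration is a bits-type declaration any user could write (syntax of this Julia version; the actual definition lives in the bootstrap code):

bitstype 64 Int64 <: Signed   # 64 bits wide, behaves as a Signed integer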

Scott Jones

unread,
Apr 1, 2015, 1:38:52 PM4/1/15
to juli...@googlegroups.com
Ah, I had been trying to find something like that... very good. I've been doing searches all over like "Julia Decimal floating arithmetic", but hadn't come across that yet (mostly I found a lot of discussion about literals and rounding problems).
I'm fairly new to Julia; I just started investigating it less than three weeks ago, and actively programming in it this last week, but I must say it's my favorite new language in many years :-) (Scheme, CLU and C had been, but it's been almost 35 years since I learned those!)
[I was lucky: I was an undergraduate TA for Sussman & Abelson the first term that Scheme was taught in 6.001, and I learned CLU from Barbara Liskov (and was also an undergraduate TA for a few years for 6.170 & 6.035).]

I'll investigate it now.
Does it use the decNumber package (which, from what I've heard, is fast and well-tested)?

I'd definitely be up for testing, possibly even bug-fixing, yes...
Thanks,
Scott

Scott Jones

unread,
Apr 1, 2015, 1:40:32 PM4/1/15
to juli...@googlegroups.com
Yes, that's precisely (pun intended) what I want...

What is int-literal for then?  Did I misunderstand that it made integer literals always 32 or 64 bits?

Thanks,
Scott

Tim Holy

unread,
Apr 1, 2015, 1:44:10 PM4/1/15
to juli...@googlegroups.com
To clarify, I have nothing to do with that package and know nothing about it;
I just was about to propose you start on a DecimalNumbers.jl package (like the
FixedPointNumbers.jl package that I do contribute to) and decided I should
search to see if it already existed.

Even when you don't get the name exactly right, the ".jl" at the end of
package names is a huge hint that google seems to exploit very effectively.

--Tim

Scott Jones

unread,
Apr 1, 2015, 1:56:42 PM4/1/15
to juli...@googlegroups.com
Ah, now I'll have to look into your FixedPointNumbers.jl package as well...

My issue is that I have data from SQL databases (either in fixed point, or with a decimal scale as in Oracle's NUMERIC data type) that I'd like to deal with easily in Julia, without converting to BigFloat and having Decimal->Float->Decimal conversion problems.
I also want to be able to take numbers from JSON and store them without losing any information (not just store them all as strings, and without the assumption many people seem to make that JSON == JS and that IEEE doubles are fine for all numbers!).

Scott

Scott Jones

unread,
Apr 1, 2015, 2:21:05 PM4/1/15
to juli...@googlegroups.com
Unfortunately, it looks like FixedPointNumbers.jl is a binary fixed point, not a decimal fixed point, and hence is still unsuitable for what I need to do...
I may end up having to try to interface to the decNumber package myself...
I'm just surprised that it hasn't already been done... there's so much neat stuff in Julia already...



Pierre-Yves Gérardy

unread,
Apr 1, 2015, 2:31:45 PM4/1/15
to julia-dev
I think that the crux of the problem is that, quoting Stefan:

> `0.1` is parsed as a Float64 and represented that way in the AST.

If decimal numbers were parsed as Exprs and converted to the appropriate type at compile time (defaulting to Float64), a `@decimal` macro could walk and patch the AST while preserving the full precision.

—Pierre-Yves

Tim Holy

unread,
Apr 1, 2015, 2:43:15 PM4/1/15
to juli...@googlegroups.com
I didn't really mean to suggest FixedPointNumbers.jl was what you wanted, just
that it's available as another model. I suspect Decimals.jl is a much better
choice, and it seems like it would be far better to contribute to that than
start your own.

--Tim

Scott Jones

unread,
Apr 1, 2015, 3:20:17 PM4/1/15
to juli...@googlegroups.com

OK, I should have said, as built-in as BigFloat is...

I hadn't realized that Int64 was just part of the library, that's nice...

Scott

Stefan Karpinski

unread,
Apr 1, 2015, 3:34:47 PM4/1/15
to juli...@googlegroups.com
On Wed, Apr 1, 2015 at 2:31 PM, Pierre-Yves Gérardy <pyg...@gmail.com> wrote:
I think that the crux of the problem is that, quoting Stefan:

> `0.1` is parsed as a Float64 and represented that way in the AST.

If decimal numbers were parsed as Exprs and converted to the appropriate type at compile time (defaulting to Float64), a `@decimal` macro could walk and patch the AST while preserving the full precision.

This would be a possible way to go. In fact, Int128 and BigInt literals invoke macros. That kind of arrangement could even allow changing the local meaning of integer literals. I'm not sure how good an idea that is and I think that there would be some big issues with performance of the parser if we did this, but it's not impossible.
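
You can already see that mechanism in the parser (exact macro names and printing vary by Julia version):

julia> parse("340282366920938463463374607431768211456")
:(@big_str "340282366920938463463374607431768211456")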

Steven G. Johnson

unread,
Apr 27, 2015, 1:15:02 PM4/27/15
to juli...@googlegroups.com
FYI, I've just posted a draft package for IEEE decimal floating-point arithmetic based on the Intel library:


This is very different from the Decimals.jl package because it implements fixed-precision (32, 64, and 128-bit) decimal floating point, not arbitrary precision.  This should make it much faster (in a quick test it was 100x faster than Decimals, though still 100x slower than Float64).

It's not feature-complete yet, or even ready for METADATA, but I think it is a good starting point for more work if people are interested.

Scott Jones

unread,
Apr 28, 2015, 10:05:40 PM4/28/15
to juli...@googlegroups.com
Is this the right place for discussion of it, or is there already a place?
This is great... I've been thinking about making a wrapper for decNumber, and would like to see if this will meet most of the needs (at least for now).

I would like to know:
1) what format you are supporting, either the packed format, or the one with binary integers with scale?  (the binary form is supposed to be much faster for software implementations)
2) you said that it was 100x slower than IEEE binary floating point, do you have any numbers (on gist maybe? where I've been told to put such things)
3) what sorts of operations did you benchmark?
4) what sorts of exceptions are thrown? (or can you get the status after an operation somehow)?
5) How are things like rounding formats set?

Wonderful stuff!

Scott

Steven G. Johnson

unread,
Apr 28, 2015, 10:46:38 PM4/28/15
to juli...@googlegroups.com


On Tuesday, April 28, 2015 at 10:05:40 PM UTC-4, Scott Jones wrote:
Is this the right place for discussion of it, or is there already a place?
 
Probably Julia-users would be better (see the ANN message there)
 
I would like to know:
1) what format you are supporting, either the packed format, or the one with binary integers with scale?  (the binary form is supposed to be much faster for software implementations)

The Intel library implements the IEEE 754-2008 binary integer significand, not the packed decimal format, I believe.  (You can easily see that the significand is binary if you do reinterpret(UInt64, x::Dec64), for example.)
 
2) you said that it was 100x slower than IEEE binary floating point, do you have any numbers (on gist maybe? where I've been told to put such things)

I haven't done much benchmarking; the 100x number is just from "sum" on an array of 100000 random 64-bit values, comparing decimal to binary.  It seemed like a reasonable number to me for software fp, so I didn't look much further.
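
Roughly the comparison described, as a sketch (Dec64 from the draft package; timings will of course vary):

x = rand(100000)       # 100000 random Float64 values
xd = map(Dec64, x)     # the same values converted to 64-bit decimal
@time sum(x)           # binary baseline
@time sum(xd)          # software decimal, ~100x slower in my quick test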

(I also tried the Decimals.jl package on the same data; this is arbitrary-precision arithmetic, and unsurprisingly it was another factor of 100x slower.)
 
4) what sorts of exceptions are thrown? (or can you get the status after an operation somehow)?

The Intel library sets a global exception flag (overflow, underflow, etc.).  Right now I'm not doing much with them; I'm imitating the Julia binary floating point and only throwing exceptions in a few cases, mainly for inexact conversions to/from integer types.  But this could easily be changed.
 
5) How are things like rounding formats set?

With set_rounding, exactly as for Julia's binary fp types.  e.g. set_rounding(Dec64, RoundDown).
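
For example (mirroring the binary-float API):

set_rounding(Dec64, RoundDown)     # Dec64 operations now round toward -Inf
Dec64(2) / Dec64(3)
set_rounding(Dec64, RoundNearest)  # restore the default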