I'd like to get some feedback on a multiprecision arithmetic library I've
been preparing for possible Boost inclusion. The code is in the sandbox
under the "big_number" directory. Docs can be viewed online here:
http://svn.boost.org/svn/boost/sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/intro.html
The main features are:
* A generic front-end that's capable of wrapping almost any type that's a
"number". The front end is expression-template enabled and handles all the
expression optimization and code reduction. For example, it's possible to
evaluate a polynomial using Horner's rule without a single temporary
variable being created.
* A series of backends that need only provide a reduced interface and set of
operations and that implement the actual arithmetic. The currently supported
backends are:
Integer Types
~~~~~~~~~
1/ GMP (MPZ).
2/ libtommath.
Rational Types
~~~~~~~~~~
1/ GMP (MPQ).
[ But note that the integer types can also be used as template arguments to
Boost.Rational ]
Floating Point Types
~~~~~~~~~~~~~~
1/ GMP (MPF)
2/ MPFR
3/ cpp_float - an all C++ Boost-licensed backend based on Christopher
Kormanyos' e_float code.
[ Note these three types are fully compatible with Boost.Math Trunk - so you
get full standard library plus special function support ]
There's still a bunch to do, but I'd like to see what folks think, and where
the main priorities should be before submission.
Thanks in advance, John.
_______________________________________________
Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
> * A generic front-end that's capable of wrapping almost any type that's
> a "number". The front end is expression template enabled and handles all
> the expression optimization and code reduction. For example it's
> possible to evaluate a polynomial using Horner's rule without a single
> temporary variable being created.
Does it mean you're translating the expression
result = a0 + x * (a1 + x * (a2 + x * a3));
to the expression
result = x;
result *= a3;
result += a2;
result *= x;
result += a1;
result *= x;
result += a0;
?
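For concreteness, here is the shape of that rewrite sketched with plain double (a minimal sketch; for an expression-template number type the second form avoids creating temporaries, while for double the two are of course equivalent):

```cpp
#include <cassert>

// Naive form: each subexpression produces a temporary when the
// number type is an expensive multiprecision type.
double horner_naive(double x, double a0, double a1, double a2, double a3) {
    return a0 + x * (a1 + x * (a2 + x * a3));
}

// In-place form the expression templates can lower to: a single
// accumulator and only compound assignments, so no temporaries.
double horner_inplace(double x, double a0, double a1, double a2, double a3) {
    double result = x;
    result *= a3;
    result += a2;
    result *= x;
    result += a1;
    result *= x;
    result += a0;
    return result;
}
```

Both forms compute the same polynomial; only the number of intermediate objects differs.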
This is something we wanted to do as well for MPFR support in the NT2
library, but we never had the time to think about it properly.
It would be nice if the logic that did this transformation were
self-contained and re-usable (unless it's trivial).
It could probably be generalized to any node that has an in-place
equivalent, and I can think of some uses beyond integration with
libraries such as MPFR. It could work with anything where operations are
pure and commutative.
Nod, the two are treated the same - in fact I use MPIR for testing under
Visual Studio.
In effect, yes.
> This is something we wanted to do as well for MPFR support in the NT2
> library, but we never had the time to think about it properly.
>
> It would be nice if the logic that did this transformation were
> self-contained and re-usable (unless it's trivial).
> It could probably be generalized to any node that has an in-place
> equivalent, and I can think of some uses beyond integration with libraries
> such as MPFR. It could work with anything where operations are pure and
> commutative.
At present it's tied pretty heavily to the mp_number internals - basically
it's all done inside the expression-template unpacking code. Note that you
need to know what the ultimate target type of the transformation is, as not
all combinations of operations can be converted to in-place equivalents, and
there are also some occasions when the out-of-place operations are actually
faster anyway (provided you already have the variables available).
John.
It looks really good!
One question: is it possible to convert an mp_number from one backend to
one in some other backend (without having to go mp_number1 -> string ->
mp_number2, of course)?
It would be nice in the future to also have "all C++ Boost-licensed
backend" options for the "Integer Types" and "Rational Types"
Maybe ;-)
An mp_number is copy-constructible from anything that the backend is
copy-constructible from.
So related types typically interconvert - for example all the GMP supported
types interconvert, and can also be constructed from the raw GMP C struct
types. However, unrelated types do not interconvert, so for example a
number based on the cpp_float backend doesn't convert to one based on the
GMP MPF backend. Given that their internal data structures are quite
different I don't see how they can interconvert either - except via a
lexical_cast.
HTH, John.
>> Integer Types
>> ~~~~~~~~~
>> 1/ GMP (MPZ).
>> 2/ libtommath.
>>
>> Rational Types
>> ~~~~~~~~~~
>> 1/ GMP (MPQ).
>> [ But note that the integer types can also be used as template arguments to Boost.Rational ]
>>
>> Floating Point Types
>> ~~~~~~~~~~~~~~
>> 1/ GMP (MPF)
>> 2/ MPFR
>> 3/ cpp_float - an all C++ Boost-licensed backend based on Christopher Kormanyos' e_float code.
>> [ Note these three types are fully compatible with Boost.Math Trunk - so you get full standard library plus special function support ]
Jarrad Waterloo:
> It would be nice in the future to also have "all C++ Boost-licensed backend" options for the "Integer Types" and "Rational Types"
I love what John has done with this concept.
I am already way behind on a proposal for
boost::multiprecision which includes the boost-licensed
floating-point backend.
I just need to find a few days to complete the documentation.
You can find boost::multiprecision in the sandbox.
Best regards, Chris.
John, just from the description... I think, "wow!"
Let's have it!
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
This is a really important development, long overdue as a tool in the Boost kit box, made possible
by Christopher Kormanyos' arbitrary precision floating-point library, which is free of licensing
difficulties, so the whole setup *can* be Boost licensed. (Those who can use other restricted
licence packages like GMP can still do so and get a modest improvement in speed.)
It should allow users to move seamlessly (well, fairly ;-) from the built-in types to much higher
precision types.
This will allow tasks which can currently only be done by moving to arbitrary precision tools like
MATLAB or Mathematica to be done in C++.
Only those bits of code that need high precision need to use it, leaving most of the code using C++
built-in types. Maximally accurate long double values can be computed and saved.
The cunning expression template implementation that John has produced offers some useful speedups
(but as he says in the docs, it doesn't give quite the speedup that the over-optimistic might
expect).
The other downside (as he also warns) is that not all existing code can use it without some code
changes. I fell straight into this pit by trying to use Boost.Test to check cpp_float IO exactly as
I had with Christopher Kormanyos's multiprecision. I had to use the Boost lightweight test version
instead, not a big problem, and one that can probably be 'cured' by some changes within Boost.Test.
So I see a need for *both* John's expression template enabled version *and* Christopher Kormanyos'
version. So keep at it, both of you!
Checks using Boost.Test also show that IO (after some grrrring by both Chris and John - it's a
really, really tiresome task) comes as close as one can reasonably expect to matching the IO of
built-ins like double. This is a highly desirable feature IMO. Boost.Test shows a reassuring number
of decimal digits when displaying failure messages!
As for priority, personally I think that the most important aspect is a Boost-license version of
integer and floating-point (with rational a slightly lower priority). Of course, for others being
able to use the 'gold-standard' GMP and MPFR may be more important.
Paul
---
Paul A. Bristow,
Prizet Farmhouse, Kendal LA8 8AB UK
+44 1539 561830 07714330204
pbri...@hetp.u-net.com
I strongly agree with those priorities. Below is a brief overview of
features I would expect from a Boost multiprecision library.
1) A bigint class that supports both heap and stack memory allocation for its
storage.
I am implementing robust predicates for geometric algorithms (Voronoi
diagrams) that operate on input objects with coordinates from an integer
domain. Naturally this requires a big integer class implementation.
I started by using the GMP classes. The drawback is slow object
creation (because memory is allocated from the heap).
One of the solutions is to split all the formulas into binary operations
and store all the temporaries and variables as struct/class members.
However, splitting complex formulas makes the code very nasty and less readable.
I ended up with my own stack-allocated big integer
implementation. This approach suits me perfectly as I don't store big
integer objects but just use them to evaluate the predicate result.
I found a bigint library under sandbox/SOC/2007/bigint. It implements both
stack and heap allocated storage for its bigint classes.
I am not sure of the reasons it is not part of Boost, but it might be a good
idea to wrap it with the multiprecision interface.
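To illustrate why stack allocation matters here, a minimal sketch of the idea (hypothetical names, not the SOC bigint or Andrii's code): with a fixed-capacity limb array, every temporary lives entirely on the stack and construction costs nothing beyond zeroing a small array.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Hypothetical sketch: an unsigned big integer with fixed stack capacity,
// stored as little-endian 32-bit limbs. No heap allocation ever occurs,
// avoiding the per-temporary malloc cost of GMP's mpz_t.
template <std::size_t Limbs>
struct stack_uint {
    std::uint32_t limb[Limbs] = {};   // limb[0] is least significant

    explicit stack_uint(std::uint32_t v = 0) { limb[0] = v; }

    // Schoolbook addition with carry propagation; wraps silently if the
    // result exceeds the fixed capacity (fine for bounded predicates).
    stack_uint& operator+=(const stack_uint& o) {
        std::uint64_t carry = 0;
        for (std::size_t i = 0; i < Limbs; ++i) {
            std::uint64_t s = std::uint64_t(limb[i]) + o.limb[i] + carry;
            limb[i] = std::uint32_t(s);
            carry = s >> 32;
        }
        return *this;
    }
};
```

The capacity is a compile-time bound, which suits geometric predicates where the maximum intermediate magnitude is known from the input coordinate width.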
2) Wrappers around the built-in floating-point types to support a wider exponent
range.
Another problem that I faced is conversion from bigint to double. The
maximum double exponent is 1023, which is too small to
handle intermediate values of the geometric predicates. I ended up
implementing a floating-point type wrapper that extends
the double exponent to the int64 range, and I think it is quite a useful feature for
problems that operate with big integers.
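A minimal sketch of such a wrapper, using frexp/ldexp normalization rather than Andrii's actual bit-level implementation (all names here are hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: keep a double fraction normalized into [0.5, 1)
// and carry the binary exponent separately in a 64-bit integer, so the
// representable exponent range is no longer limited to +/-1023.
struct efloat {
    double frac;    // normalized fraction in [0.5, 1), or 0
    long long exp;  // binary exponent with 64-bit range

    explicit efloat(double v = 0.0) {
        int e = 0;
        frac = std::frexp(v, &e);  // v == frac * 2^e
        exp = e;
    }

    efloat& operator*=(const efloat& o) {
        // Multiply fractions with native hardware arithmetic, then
        // renormalize and add the exponents in integer arithmetic.
        int e = 0;
        frac = std::frexp(frac * o.frac, &e);
        exp += o.exp + e;
        return *this;
    }

    // Convert back to double; overflows to inf / underflows to 0
    // when the extended exponent no longer fits.
    double to_double() const { return std::ldexp(frac, int(exp)); }
};
```

Because the fraction arithmetic is a single native multiply, the overhead over plain double is just the exponent bookkeeping.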
Andrii
> 2) Wrappers around the built-in floating-point types to support a wider
> exponent range.
> Another problem that I faced is conversion from bigint to double. The
> maximum double exponent is 1023, which is too small to
> handle intermediate values of the geometric predicates. I ended up
> implementing a floating-point type wrapper that extends
> the double exponent to the int64 range, and I think it is quite a useful
> feature for problems that operate with big integers.
I like that idea. We have also been implementing our own floating-point
numbers with a base of 10 because of requirements in the financial
business. I think it should be feasible to write a general wrapper for
floating-point numbers with an arbitrary base as well. Taking all that
together, we could have e.g. floating-point numbers to a base of 77 with a
five-byte mantissa and a 42-byte exponent, if needed anywhere.
Christof
P.S.: Since I am rather new here, can someone please send me a link
where I can teach myself about the Boost development process?
--
okunah gmbh Software nach Maß
Werner-Haas-Str. 8 www.okunah.de
86153 Augsburg c...@okunah.de
Registergericht Augsburg
Geschäftsführer Augsburg HRB 21896
Christof Donat UStID: DE 248 815 055
While I agree that this functionality is useful, especially for financial
problems, it is probably not a good idea to try to unify it with the
problem I mentioned.
An implementation of an extended exponent wrapper around the double/float
types can be very efficient, as in most cases it uses the native double
operators (e.g. +,-,*,/,sqrt) plus
some additional magic with the exponent bits, which are stored in an int64 for
double (or an int32 for float). It also satisfies the IEEE 754 standard's
rounding requirements for the operations
+,-,*,/,sqrt without any additional overhead. To be completely clear, I
didn't define INF and NaN in my implementation, as my code safely avoids
them. So the main reason against
providing a generic wrapper to suit both our problems is that it probably
won't be as efficient in cases where you just need to extend the double/float
exponent.
However, the functionality you mentioned would be a good addition as a
separate module.
> P.S.: Since I am rather new here, can someone please send me a link, where
> I can teach myself about the boost development process?
I would suggest those links as a good start:
http://www.boost.org/development/requirements.html
http://www.boost.org/development/submissions.html
Andrii
> So the main reason against providing a generic wrapper to suit both our
> problems is that it probably won't be as efficient in cases where you
> just need to extend the double/float exponent.
I have not taken the time to think about the overhead of your problem in
depth. I'd simply expect that you know that area better. From my
current point of view I think that the efficiency issue could be
solved with specializations for base 2.
Is there a place where I can read your code to see how you have
implemented it?
>> P.S.: Since I am rather new here, can someone please send me a link,
>> where
>> I can teach myself about the boost development process?
>
> I would suggest those links as a good start:
> http://www.boost.org/development/requirements.html
> http://www.boost.org/development/submissions.html
Thanks.
Christof
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n1977.html
Yes:
https://svn.boost.org/svn/boost/sandbox/gtl/boost/polygon/detail/voronoi_robust_fpt.hpp
The two classes you are interested in are:
1) fpt_exponent_accessor - provides read/write access to the double
exponent bits.
2) extended_exponent_fpt - the extended exponent floating-point (double)
type class itself.
This part of the code lacks documentation, as it was only recently added.
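For readers who don't want to dig through the sandbox, here is a rough sketch of the kind of bit manipulation such an accessor performs on an IEEE 754 double (hypothetical helper names, not the actual voronoi_robust_fpt.hpp code): bits 52..62 hold the biased exponent, with bias 1023.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Reinterpret a double's bits as a 64-bit integer (well-defined type pun).
inline std::uint64_t bits_of(double d) {
    std::uint64_t u;
    std::memcpy(&u, &d, sizeof u);
    return u;
}

// Unbiased exponent of a normal, nonzero double.
inline int read_exponent(double d) {
    return int((bits_of(d) >> 52) & 0x7FF) - 1023;
}

// Return d with its exponent field replaced by the unbiased value e
// (e must fit the 11-bit biased range; no overflow check in this sketch).
inline double write_exponent(double d, int e) {
    std::uint64_t u = bits_of(d);
    u = (u & ~(std::uint64_t(0x7FF) << 52)) | (std::uint64_t(e + 1023) << 52);
    double r;
    std::memcpy(&r, &u, sizeof r);
    return r;
}
```

Writing the exponent field directly like this rescales a number by a power of two without touching the significand, which is the cheap operation the extended-exponent type relies on.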
Andrii
I can think of very little application for a generic numeric type with an arbitrary base. Base 2
and base 10 are the two useful bases, with base 10 being applicable to finance and base 2 to
everything else. I think it makes sense for them to be two different libraries.
There have been attempts to standardize a base 10 numerical data type in C++ going all the way
back to the days when hardware natively supported base 10 arithmetic. A base 10 numerical data
type in Boost would be in the category of a library targeting standardization. It is probably best
that it be minimally coupled to other Boost libraries and implemented by the people proposing the
base 10 standard library extension.
Regards,
Luke
Why not use the built-in quadruple precision support of the compiler?
GCC, for example, has a __float128 type that implements the IEEE 754
binary128 format.
For financial applications we would need an exact representation of numbers like 1.1 or 0.03551;
this can be achieved by combining an integer (11, 3551) with a decimal exponent. This is somewhat
orthogonal to the number of bits of precision.
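A minimal sketch of that representation (hypothetical names, not proposing an API): the coefficient is an exact integer and the scale is a power of ten, so 1.1 is (11, -1) and 0.03551 is (3551, -5), both exact where binary doubles are not.

```cpp
#include <cassert>

// Hypothetical sketch of the exact-decimal idea: an integer coefficient
// paired with a base-10 exponent; value = coeff * 10^exp10.
struct decimal {
    long long coeff;  // integer significand, exact
    int exp10;        // decimal exponent

    // Exact multiplication: multiply coefficients, add exponents.
    // (Coefficient overflow is ignored in this sketch; a real
    // implementation would widen or round under a chosen mode.)
    decimal operator*(const decimal& o) const {
        return { coeff * o.coeff, exp10 + o.exp10 };
    }
};
```

For example, 1.1 * 0.03551 comes out as (39061, -6), i.e. exactly 0.039061, with no binary rounding anywhere.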