That's a great blog post, Fredrik. Since your blog doesn't seem to have
a comments box, I will add my comments here.
One thing I would add to the mpmath history is the SymPy 1.0 release
(March 2016), which officially made mpmath an external dependency of
SymPy. Prior to that, a copy of mpmath shipped as sympy.mpmath.
I've been using mpmath (via SymPy) quite a bit in my own recent
research (computing the CRAM approximation to exp(-t) on [0, oo) to
arbitrary precision). I'm always amazed at how stable mpmath is. It
always gives what seem to be correct answers, or fails gracefully when
it can't. I did find some minor holes in mpmath (I had to tweak the
maxsteps and tol parameters of findroot (via sympy.nsolve), see
), but they were quite easy to work around.
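Just to make that concrete, here is a minimal sketch of the kind of
tweak I mean, assuming (as I believe is the case) that nsolve forwards
extra keyword arguments on to mpmath.findroot. The equation and the
parameter values are toy stand-ins for the actual CRAM equations:

    from sympy import symbols, exp, nsolve

    t = symbols('t')
    # Toy stand-in for the CRAM fitting equations; the real ones are
    # much more ill-conditioned.
    eq = exp(-t) - t

    # The defaults can fail to converge on harder problems; raising
    # maxsteps and adjusting tol is the workaround mentioned above.
    # These keyword arguments are passed through to mpmath.findroot.
    root = nsolve(eq, t, 0.5, prec=50, maxsteps=200, tol=1e-40)
    print(root)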
Regarding Arb, I would love to see Python bindings. I would suggest
writing an ArbPy wrapper library, so that people can use Arb on its
own from Python, and then we can use that to improve mpmath and SymPy.
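A first version would not need to be fancy: even a thin ctypes layer
would be usable. Here is a speculative sketch (not working code from
any existing library); the function names are from Arb's documented C
API, but the shared library name and the calling details below are
assumptions:

    import ctypes
    import ctypes.util

    # The library name varies by packaging (e.g. libarb vs.
    # libflint-arb), so this lookup is an assumption.
    libarb = ctypes.CDLL(ctypes.util.find_library("arb"))

    # _arb_vec_init(n) heap-allocates and initializes n arb_t values,
    # letting us treat arb_t as an opaque pointer from Python.
    libarb._arb_vec_init.restype = ctypes.c_void_p
    libarb._arb_vec_init.argtypes = [ctypes.c_long]

    x = libarb._arb_vec_init(1)
    libarb.arb_const_pi(ctypes.c_void_p(x), ctypes.c_long(256))  # pi to 256 bits
    libarb.arb_printd(ctypes.c_void_p(x), ctypes.c_long(30))     # print ~30 digits
    libarb._arb_vec_clear(ctypes.c_void_p(x), ctypes.c_long(1))

A real ArbPy would of course wrap this in a Python class with operator
overloading, the way mpmath's mpf works.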
There's been some interest in using something like Arb for code
generation. The idea is this: you can use SymPy to create a model for
something, and then use the codegen module to generate fast machine
code to compute it. But the problem is that you don't necessarily know
how accurate that machine code is. What if there are numerical issues
that lead to highly inaccurate results? So the idea is to swap the
code generator's backend to something like Arb, and perform the
same computation with guaranteed error bounds. This will obviously be slower
than the machine code, so you wouldn't use it in practice, but instead
you'd use it to get some assurance on the accuracy of your results
with machine floats. If the accuracy is bad, you might have to look
into modifying the algorithm. Or in the worst case, you just have to
use a slower arbitrary-precision library to get the precision you
need. But critically, since everything is code-generated, the whole
thing would (in theory at least) be as simple as changing some flag in
the code generator.
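To illustrate what I mean, here is a rough sketch of the workflow,
using lambdify as a stand-in for the codegen module and mpmath as a
stand-in for Arb (mpmath doesn't give the rigorous bounds that Arb
would, but the plumbing is the same). Swapping the backend really is
just a one-argument change:

    import mpmath
    from sympy import symbols, exp, lambdify

    x = symbols('x')
    # An expression with catastrophic cancellation near x = 0, so the
    # machine-float version loses most of its digits there.
    expr = (exp(x) - 1)/x

    f_float = lambdify(x, expr, 'math')    # "fast machine code" backend
    f_mp = lambdify(x, expr, 'mpmath')     # swapped-in verification backend

    mpmath.mp.dps = 50
    for x0 in [1e-6, 1e-10, 1e-14]:
        approx = f_float(x0)
        reference = f_mp(mpmath.mpf(x0))
        rel_err = abs((approx - reference)/reference)
        print(x0, approx, mpmath.nstr(rel_err, 5))

At x = 1e-14 the machine-float result is only good to about two
digits, which is exactly the kind of thing you'd want the verified
backend to flag.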