Rational Chebyshev approximations for the special functions


Ondřej Čertík

Oct 2, 2012, 8:12:17 PM
to sympy, Fredrik Johansson
Hi,

Does anyone have experience with implementing rational function approximations
to a given special function of one variable? This would be an extremely useful
addition to sympy. Here is an example for the error function from the standard
gfortran library:

https://github.com/mirrors/gcc/blob/master/libgfortran/intrinsics/erfc_scaled_inc.c

Whenever you call the error function in a Fortran program compiled with
gfortran, this function gets called. So it needs to be accurate (in double
precision) and very fast.

If you look at the implementation, they split the real axis into several
intervals:

[0, 0.46875] 4 terms
(0.46875, 4] 8 terms
(4, oo) 5 terms

In each interval they use a rational function approximation that is guaranteed
to provide at least 18 significant decimal digits; I've indicated the number
of terms for each interval above.

So you cannot get more accurate than that in double precision, and in terms of
speed this is pretty much impossible to beat.
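The structure of such an implementation is simple enough to sketch; the code
below is only an illustration (the helper names and the coefficient table are
made up, NOT the actual libgfortran data):

```python
# Minimal sketch of a piecewise rational approximation; the helper
# names and the coefficient table here are hypothetical placeholders,
# not the real libgfortran coefficients.
def rational(p, q, x):
    """Evaluate P(x)/Q(x) with coefficient lists in ascending order."""
    num = sum(c * x**i for i, c in enumerate(p))
    den = sum(c * x**i for i, c in enumerate(q))
    return num / den

def piecewise_approx(x, table):
    """Pick the first interval whose upper bound covers x, then
    evaluate that interval's rational function."""
    for upper, (p, q) in table:
        if x <= upper:
            return rational(p, q, x)
    raise ValueError("no interval covers x = %r" % x)
```

With a real coefficient table (one (p, q) pair per interval), each call costs
a comparison or two plus a handful of multiply-adds.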


What I actually need is a similar approximation for modified Bessel functions
of half-integer order. I've been learning the methods. Originally I thought I
would just implement a general hypergeometric function (of which the Bessel
functions are just a simple special case), and Fredrik has been super helpful
with this, as he implemented general solvers in mpmath for arbitrary
precision. Mpmath works great, but it's slow. I've implemented a similar
hypergeometric function in Fortran for double precision, and it's still about
10x slower than my series approximation from sympy (directly copy&pasted to
Fortran). The challenge is to choose the intervals on which it works, and I've
been checking the accuracy by hand so far.

I really need this to be fast, so I realized that the only way to nail this
down once and for all is to use the same tricks as the error function above.

The interface to sympy that I am imagining would be to give sympy a formula
(later maybe even just some numerical function, for cases where there is no
simple formula). For example, the difficult part of I_{9/2}(x) is:

In [1]: r = (105/x**4 + 45/x**2 + 1)*sinh(x) - (105/x**3 + 10/x)*cosh(x)

This is a simple exact formula (I am actually lucky that such a formula
exists; typically I only have a general hypergeometric series that needs to be
summed up, like for the error function). However, even this formula *cannot*
be used for low "x", for example:

In [2]: r.subs(x, S(1)/10).n()
Out[2]: 1.05868215119243e-8

In [3]: r.subs(x, S(1)/10.)
Out[3]: 1.05937942862511e-8

Here [2] is the correct answer (using the adaptive evaluation that Fredrik
implemented in mpmath), while [3] simply evaluates the formula in floating
point (similar to what Fortran does). As you can see, out of about 15 digits,
the result in [3] got only 3 right due to numerical cancellation. So that's
unusable.
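The cancellation is easy to reproduce with mpmath alone by evaluating the same
formula at two precisions (a sketch; 30 digits is an arbitrary choice of guard
precision):

```python
from mpmath import mp, mpf, sinh, cosh

def r(x):
    # the exact formula [1] from above
    return (105/x**4 + 45/x**2 + 1)*sinh(x) - (105/x**3 + 10/x)*cosh(x)

mp.dps = 30                 # enough guard digits to absorb the cancellation
exact = r(mpf(1)/10)

mp.dps = 15                 # roughly double precision
naive = r(mpf(1)/10)

# `exact` agrees with the adaptive sympy result to full precision,
# while `naive` has lost most of its digits to the cancellation
```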
The solution that I implemented in my program for now is this:

In [4]: s = r.series(x, 0, 15).removeO()

In [5]: s
Out[5]: x**13/13232419200 + x**11/97297200 + x**9/1081080 + x**7/20790
+ x**5/945

In [6]: s.subs(x, S(1)/10).n()
Out[6]: 1.05868215119243e-8

In [7]: s.subs(x, S(1)/10.)
Out[7]: 1.05868215119243e-8

[6] and [7] agree to all significant digits, which means that this series can
be summed up accurately in floating point. Finally, the agreement of [6] with
[2] means that the series reproduces the exact answer to all significant
digits.

So we know that for x=0.1 we can use this series. By experimenting with this
formula, I found that for x > 4, the formula [1] gives the exact answer in
double precision:

In [8]: r.subs(x, S(4)).n()
Out[8]: 2.16278782780322

In [9]: r.subs(x, 4.)
Out[9]: 2.16278782780323

That's good enough. For x lower than 4 it is less accurate, for example:

In [10]: r.subs(x, 1.)
Out[10]: 0.00110723646096744

In [11]: r.subs(x, S(1)).n()
Out[11]: 0.00110723646098546


Finally, the series [5] seems accurate up to x = 0.4:

In [12]: r.subs(x, S(4)/10).n()
Out[12]: 1.09150288698177e-5

In [13]: s.subs(x, S(4)/10).n()
Out[13]: 1.09150288698173e-5


So after this painful analysis, we have found:

[0, 0.4] use [5]
[4, oo] use [1]


Now we could expand the function around x=1 and repeat the analysis above
until we cover the whole real axis. So first of all, this should be automated.
But a plain series expansion is *not* the best approximation: the series is
far more accurate than necessary around x=0 and barely accurate around x=0.4.
A better choice is a so-called Chebyshev approximation, which gives uniform
accuracy over the interval (the end result is that fewer terms are needed).
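mpmath already ships a routine for this: chebyfit computes a polynomial from a
truncated Chebyshev expansion and returns an error estimate. A minimal sketch
on the [0.1, 0.4] piece of the formula [1] above (the interval and the term
count are just example choices):

```python
from mpmath import mp, mpf, chebyfit, polyval, sinh, cosh

mp.dps = 30

def f(x):
    # the I_{9/2}-related expression [1] from above
    return (105/x**4 + 45/x**2 + 1)*sinh(x) - (105/x**3 + 10/x)*cosh(x)

# 10-term Chebyshev-based polynomial fit on [0.1, 0.4]; `err` is the
# estimated maximum error on the interval
coeffs, err = chebyfit(f, [mpf('0.1'), mpf('0.4')], 10, error=True)

# polyval takes the coefficients in the order chebyfit returns them
approx = polyval(coeffs, mpf('0.25'))
```

Because the error is roughly uniform over the interval, far fewer terms are
needed than with a Taylor series truncated to the same worst-case accuracy.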

Finally, even better than a single polynomial is a rational function, i.e. a
ratio of two polynomials. This seems to be the most effective.
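One readily available flavor of rational approximation is the Padé approximant,
which mpmath can build directly from Taylor coefficients. A sketch on exp
rather than the Bessel formula, to keep it short (the [3/3] orders are an
arbitrary example):

```python
from mpmath import mp, mpf, taylor, pade, polyval, exp

mp.dps = 30

# Taylor coefficients of exp at 0 (degrees 0..6), then the [3/3]
# Pade approximant; p and q come back in ascending order
a = taylor(exp, 0, 6)
p, q = pade(a, 3, 3)

x = mpf('0.5')
# polyval expects descending order, hence the reversal
approx = polyval(p[::-1], x) / polyval(q[::-1], x)
```

A [3/3] approximant already matches exp(0.5) to roughly 7 digits with only 8
coefficients, which is the appeal of rational over polynomial forms.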

I found an algorithm in Numerical Recipes:

http://www.mpi-hd.mpg.de/astrophysik/HEA/internal/Numerical_Recipes/f5-13.pdf

It only needs a numerical function as input, which is ideal.

Does anyone have any experience with this? This would be really cool
to have in sympy and it doesn't seem so difficult to implement.

Ondrej

Bharath M R

Oct 4, 2012, 5:50:20 AM
to sy...@googlegroups.com, Fredrik Johansson
I spent some time implementing rational Chebyshev approximation as part of a
course. The code is at [1]. It is mostly a port of the NR code. The problem
that will arise is how to determine the number of coefficients needed to
achieve a given accuracy. This changes with the function, and if we try to
overfit the approximation, the accuracy is not that great.
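One simple-minded way to choose the number of coefficients is to increase it
until the fit's own error estimate meets the tolerance. A sketch using
mpmath's chebyfit (the function, tolerance and cap are just examples):

```python
from mpmath import mp, mpf, chebyfit, exp

mp.dps = 30

def fit_to_tolerance(f, interval, tol, max_terms=30):
    """Increase the number of terms until chebyfit's estimated
    maximum error drops below `tol`."""
    for N in range(2, max_terms + 1):
        coeffs, err = chebyfit(f, interval, N, error=True)
        if err < tol:
            return coeffs, err
    raise ValueError("tolerance not reached with %d terms" % max_terms)

coeffs, err = fit_to_tolerance(exp, [0, 1], mpf('1e-10'))
```

This doesn't address overfitting directly, but stopping at the first N that
meets the tolerance at least avoids using more terms than necessary.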


Thanks,
Bharath M R


PS: The code is really bad. I had just started out with python then.

Ondřej Čertík

Oct 4, 2012, 10:58:48 AM
to sy...@googlegroups.com, Fredrik Johansson
Thanks for this. I have a little trouble understanding what exactly the code
does --- but it looks simple enough.

I am currently quite busy, but eventually I need to learn this.

Ondrej

Fredrik Johansson

Oct 4, 2012, 11:02:39 AM
to Ondřej Čertík, sympy
On Wed, Oct 3, 2012 at 2:12 AM, Ondřej Čertík <ondrej...@gmail.com> wrote:
> Hi,
>
> Does anyone have experience with implementing rational function approximations
> to a given special function of one variable? This would be extremely
> useful addition
> to sympy. Here is an example for the error function from the standard
> gfortran library:

This is something I've needed quite frequently, but I've never been
bothered enough to code it myself. I usually just use Mathematica
(EconomizedRationalApproximation or MiniMaxApproximation). It would be
great to have in sympy or mpmath.

Fredrik

Ondřej Čertík

Oct 4, 2012, 11:24:50 AM
to Fredrik Johansson, sympy
Yes, this is it --- we need these two functions. I didn't know that
Mathematica had them, nice.

I've been thinking about how to automatically cover a whole interval, let's
say (0, oo). I would simply pick "m, n", then start on (0, a), call
EconomizedRationalApproximation and check the error; if it is lower than 1e-18
(absolute error, and probably also some relative error), increase "a". Once we
have the maximum "a", we do the interval (a, b) and maximize "b". And so on.
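That greedy loop might look like this (a sketch only, using mpmath's chebyfit
as a stand-in for EconomizedRationalApproximation, which sympy doesn't have
yet; the names and defaults are made up):

```python
from mpmath import mp, mpf, chebyfit, sin

mp.dps = 30

def extend_interval(f, a, N, tol, step=mpf('0.5'), bmax=mpf('5')):
    """Grow [a, b] while an N-term fit still meets `tol`; return the
    largest b found and its coefficients, or None if even the first
    interval fails."""
    b, best = a + step, None
    while b <= bmax:
        coeffs, err = chebyfit(f, [a, b], N, error=True)
        if err > tol:
            break
        best = (b, coeffs)
        b += step
    return best

best = extend_interval(sin, mpf(0), 10, mpf('1e-10'))
```

A production version would bisect on b instead of stepping linearly, and would
check a relative error as well, but the shape of the search is the same.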

Ondrej

Bharath M R

Oct 4, 2012, 11:33:32 AM
to sy...@googlegroups.com
I am not sure, but I think that would take a lot of time. I think we have to
expose both the number of coefficients and the range (a, b), and let the user
decide.

I think I can send a pull request regarding this soon(weekend).

-- 
Bharath M R
5th Year undergraduate student,
IIT Madras

--
You received this message because you are subscribed to the Google Groups "sympy" group.
To post to this group, send email to sy...@googlegroups.com.
To unsubscribe from this group, send email to sympy+un...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/sympy?hl=en.

Ondřej Čertík

Oct 4, 2012, 12:15:09 PM
to sy...@googlegroups.com
On Thu, Oct 4, 2012 at 8:33 AM, Bharath M R <catchmr...@gmail.com> wrote:
> I am not sure but I think that would take a lot of time. I think
> we have to expose both number of coefficients and the ranges
> (a, b) and let the user decide.

As the first step, absolutely. If possible, just follow the Mathematica
interface:

http://reference.wolfram.com/mathematica/FunctionApproximations/ref/EconomizedRationalApproximation.html

After this is all done and works well, we can add a higher-level layer that
optimizes the intervals, but that's not as important as having
EconomizedRationalApproximation in sympy.

>
> I think I can send a pull request regarding this soon(weekend).

That'd be really cool, thanks!
Send it, and we can improve upon it.

Ondrej

Ondřej Čertík

Oct 4, 2012, 9:02:00 PM
to sympy
On Tue, Oct 2, 2012 at 5:12 PM, Ondřej Čertík <ondrej...@gmail.com> wrote:
[...]
> For example the difficult part of I_{9/2}(x) is:
>
> In [1]: r = (105/x**4 + 45/x**2 + 1)*sinh(x) - (105/x**3 + 10/x)*cosh(x)

[...]

> So after this painful analysis, we have found:
>
> [0, 0.4] use [5]
> [4, oo] use [1]
>
>
> Now we can expand the function around x=1, and repeat the analysis
> above until we cover the whole real axis. So first of all, this should
> be automated.
> But then the series expansion is *not* the best approximation, because
> the series is very (too much) accurate around x=0, and barely accurate
> around x=0.4.
> A better approximation is to use a so called Chebyshev approximation,
> which gives uniform accuracy (the end result is that less terms are
> needed).
>
> Finally, even better than just using one series, it's better to use a
> rational function, which is a fraction of two polynomials. This seems
> to be the most effective.

I've used MiniMaxApproximation in Mathematica for the function I_{4+1/2}(x),
and by trial and error I divided the real axis into the intervals (0, 0.2),
(0.2, 1.7), (1.7, 4), (4, 10), (10, 20), (20, oo). In the first I use the
series from sympy; the second, third and fourth use the rational approximation
from Mathematica; the fifth uses the exact analytic formula with sinh/cosh;
and the last one uses the fact that sinh(x)/exp(x) equals 1/2 to double
precision for x in (20, oo).
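The last piece is safe because sinh(x)/exp(x) = (1 - exp(-2x))/2, and for
x >= 20 the exp(-2x) term is around 4e-18, below the resolution of a double:

```python
import math

# For x >= 20, sinh(x)/exp(x) = (1 - exp(-2*x))/2, and exp(-40)/2 is
# about 2e-18, well below the ~2.2e-16 double-precision epsilon, so
# the ratio is 1/2 to working precision.
for x in (20.0, 25.0, 30.0):
    assert abs(math.sinh(x) / math.exp(x) - 0.5) < 1e-15
```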

The Python code and graphs are here:

https://gist.github.com/3837394

You can see from the error plot that the rational approximation is accurate to
machine precision (compared to the "exact" answer from mpmath), pretty much
uniformly for any "x". A possible improvement is to use Horner's rule to
evaluate the polynomials, but otherwise I think this is pretty much as fast as
it can get; the only way to make it faster is to use more intervals and lower
the polynomial order.
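For reference, Horner's rule itself is a one-liner (a generic sketch, not the
code from the gist):

```python
def horner(coeffs, x):
    """Evaluate coeffs[0] + coeffs[1]*x + coeffs[2]*x**2 + ... using
    Horner's rule: one multiply and one add per coefficient, and
    better rounding behavior than summing powers directly."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result
```

For example, horner([1.0, 2.0, 3.0], 2.0) gives 1 + 2*2 + 3*4 = 17.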

So this is the solution that I am looking for. It'd be nice to be able
to do the same thing using sympy or mpmath.

Ondrej

Ondřej Čertík

Oct 4, 2012, 9:04:42 PM
to sympy
On Thu, Oct 4, 2012 at 6:02 PM, Ondřej Čertík <ondrej...@gmail.com> wrote:
[...]
> A possible improvement is to use the Horner's rule to evaluate the
> polynomials, but otherwise
> I think this is pretty much as fast as it can be, the only way to make
> it faster is to put there
> more intervals and lower the polynomial order.

I forgot to add that the Fortran implementation is here:

https://github.com/certik/hfsolver/commit/7b8871fb64e748401abcb9803c74ab40b009c575

This is the one that is really fast.

Ondrej

Bharath M R

Oct 13, 2012, 1:15:22 AM
to sy...@googlegroups.com
Hi,
   I was trying to implement the rational Chebyshev series and I hit a
roadblock: SymPy doesn't handle eigenvalue decomposition of numerical matrices
very well [1].

One way around it is to use numpy's eigenvalue decomposition, i.e. import it
using ``import_module`` and get the eigenvectors. But that adds a dependency
on numpy for the series computation.

Is that the right way to go about it? I know that the plotting module has a
dependency on numpy and matplotlib. Can I implement rational Chebyshev with
numpy's eig?
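The optional-dependency pattern itself is small; a sketch assuming numpy's eig
is an acceptable numerical fallback (the example matrix is arbitrary):

```python
from sympy.external import import_module

# import_module returns None when numpy isn't installed, so the
# dependency stays optional
np = import_module('numpy')
if np is not None:
    # numerical eigen-decomposition of a small matrix, as a fallback
    # for what sympy's symbolic eigenvects struggles with
    M = np.array([[2.0, 1.0], [0.0, 3.0]])
    w, v = np.linalg.eig(M)
```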

Thanks,
Bharath M R


Aaron Meurer

Oct 14, 2012, 12:51:57 AM
to sy...@googlegroups.com
We should investigate why it can't compute the eigenvectors, and fix
it. It should be able to do it.

Aaron Meurer