spherical bessel functions and utilities implemented


Ondrej Certik

May 15, 2009, 2:32:00 AM
to sympy-...@googlegroups.com
Hi,

please review my spherical bessel patches:

http://github.com/certik/sympy/tree/special

Thanks,
Ondrej

Priit Laes

May 15, 2009, 6:16:00 AM
to sympy-...@googlegroups.com
On a fine day, Thu, 2009-05-14 at 23:32, Ondrej Certik wrote:
> Hi,
>
> please review my spherical bessel patches:
>
I didn't check the math part, but in the jn_zeros() docstring there's
method = "sympy" instead of "scipy" in one place.

Priit :)

Ondrej Certik

May 15, 2009, 3:05:37 PM
to sympy-...@googlegroups.com

Ah, thanks for noticing:

+ method = "sympy": uses the scipy's sph_jn and newton to find all roots,
+ which if faster than method="sympy", but it requires scipy and only

this should be scipy.

Apart from that, are there any other objections?

Ondrej

Vinzent Steinberg

May 16, 2009, 5:51:55 AM
to sympy-...@googlegroups.com
2009/5/15 Ondrej Certik <ond...@certik.cz>

> # The findroot() is fragile, it sometimes returns complex numbers,
> # so we chop all complex parts (that are small anyway). Also we
> # need to set the tolerance, as it sometimes fail without it.
> def f_real(x):
>     return f(complex(x).real)
> root = findroot(f_real, x, solver="muller", tol=1e-9)
> root = complex(root).real
 
As mentioned in my other post, I don't get the point of using method='muller', especially if you are not interested in complex roots.

I recommend just using the default (secant); it won't give complex results for real starting points. Or you might want to use a bisection-based (bracketing) method; it would be more reliable and faster.
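For what it's worth, here is a minimal sketch of that suggestion, using a toy function rather than the actual jn_zeros code:

```python
from mpmath import findroot, sin, pi

# For a real root of a real-valued function, the default secant solver
# stays on the real axis, whereas solver="muller" fits a parabola and
# may step into the complex plane.
root = findroot(sin, 3.0)  # default solver (secant), real starting point
assert abs(root - pi) < 1e-10
```

The same call with solver="muller" can return a value with a tiny spurious imaginary part, which is what the chopping code above is working around.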

Do you think some of the functions should be rather implemented in mpmath?

Vinzent

Ondrej Certik

May 16, 2009, 2:42:24 PM
to sympy-...@googlegroups.com

Unfortunately, it didn't converge for me at all. Any ideas why? I'll
try to write my own newton method and then see if it works.

>
> Do you think some of the functions should be rather implemented in mpmath?

which functions do you mean exactly? The jn_zeros? I don't mind. But
mpmath is sometimes very slow, e.g. I want to be able to use scipy
optionally. It can be in mpmath, I have no problems with that. But
then you would need the jn() function too in there, but that one uses
sympy a lot, so I don't think it's that easy.

Ondrej

Fredrik Johansson

May 16, 2009, 2:53:02 PM
to sympy-...@googlegroups.com
>> Do you think some of the functions should be rather implemented in mpmath?
>
> which functions do you mean exactly? The jn_zeros? I don't mind. But
> mpmath is sometimes very slow, e.g. I want to be able to use scipy
> optionally. It can be in mpmath, I have no problems with that. But
> then you would need the jn() function too in there, but that one uses
> sympy a lot, so I don't think it's that easy.

Bessel function zeros, along with a couple of other inverse functions,
will benefit greatly from the more generic high-level interface
being added to mpmath.

Basically, the first 16 digits can be computed entirely using machine
precision arithmetic, and then in the high precision case, mpmath can
use the machine precision estimate as the initial argument to
findroot.
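As a sketch of that two-stage idea (the helper name and interface here are made up for illustration, not mpmath's actual API):

```python
from mpmath import mp, mpf, findroot, besselj

def besselj_zero_highprec(v, x0, dps=50):
    """Hypothetical helper: polish a machine-precision estimate x0
    of a zero of J_v at high precision using findroot."""
    old_dps = mp.dps
    try:
        mp.dps = dps
        return findroot(lambda x: besselj(v, x), mpf(x0))
    finally:
        mp.dps = old_dps

# The first zero of J_0 is near 2.404825557695773 (machine-precision guess).
z = besselj_zero_highprec(0, 2.404825557695773)
```

The cheap part (getting the first ~16 digits) happens in double precision; mpmath only pays the high-precision cost for the final refinement.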

Fredrik

Ondrej Certik

May 16, 2009, 2:56:35 PM
to sympy-...@googlegroups.com

That'd be cool -- but it will still be slower than scipy -- sometimes
I just need it fast, but it's ok that maybe the 10th digit is wrong.
So I want to have both.

How will you implement jn()? One can use the same thing as in scipy,
but I find it pretty cool that sympy can now (I mean after the patches
are accepted) find an exact formula for it.

Ondrej

Fredrik Johansson

May 16, 2009, 3:20:57 PM
to sympy-...@googlegroups.com
On Sat, May 16, 2009 at 11:56 AM, Ondrej Certik <ond...@certik.cz> wrote:
> That'd be cool -- but it will still be slower than scipy -- sometimes
> I just need it fast, but it's ok that maybe the 10th digit is wrong.
> So I want to have both.

Some mpmath functions have turned out to be faster than their
scipy counterparts when I reimplemented them for float. I don't know
what the case will be for Bessel functions.

> How will you implement jn()? One can use the same thing as in scipy,
> but I find it pretty cool that sympy can now (I mean after the patches
> are accepted) find an exact formula for it.

Numerically, there's no point implementing spherical Bessel functions using
any other formula than sqrt(pi/2/z)*besselj(v+0.5,z),
except adding some additional code to handle special cases.
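For reference, that relation as a quick mpmath sketch (the wrapper name follows scipy's sph_jn for familiarity, but is otherwise mine):

```python
from mpmath import besselj, sqrt, pi, sin

def sph_jn(n, z):
    # Spherical Bessel function via the half-integer-order relation:
    # j_n(z) = sqrt(pi/(2z)) * J_{n+1/2}(z)
    return sqrt(pi / (2 * z)) * besselj(n + 0.5, z)

# Sanity check against the elementary closed form j_0(z) = sin(z)/z:
z = 1.7
assert abs(sph_jn(0, z) - sin(z) / z) < 1e-12
```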

By the way, jn() denotes an ordinary Bessel function in C99 math.h
and scipy. You might want to change the names.

Fredrik

Ondrej Certik

May 16, 2009, 3:30:46 PM
to sympy-...@googlegroups.com
On Sat, May 16, 2009 at 12:20 PM, Fredrik Johansson
<fredrik....@gmail.com> wrote:
>
> On Sat, May 16, 2009 at 11:56 AM, Ondrej Certik <ond...@certik.cz> wrote:
>> That'd be cool -- but it will still be slower than scipy -- sometimes
>> I just need it fast, but it's ok that maybe the 10th digit is wrong.
>> So I want to have both.
>
> Some mpmath functions have turned out to be faster than their
> scipy counterparts when I reimplemented them for float. I don't know
> what the case will be for Bessel functions.

Interesting.

>
>> How will you implement jn()? One can use the same thing as in scipy,
>> but I find it pretty cool that sympy can now (I mean after the patches
>> are accepted) find an exact formula for it.
>
> Numerically, there's no point implementing spherical Bessel functions using
> any other formula than sqrt(pi/2/z)*besselj(v+0.5,z),
> except adding some additional code to handle special cases.

That boils down to calling hyper() in mpmath/functions.py. The
question is whether that is faster than just evaluating one sin, one
cos and two simple polynomials.

E.g., is hyper comparably fast to sin, or is it, say, 3x slower?
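For comparison, the elementary closed forms for the first two orders look like this (plain floats, sketch only):

```python
from math import sin, cos

def j0(z):
    # j_0(z) = sin(z)/z
    return sin(z) / z

def j1(z):
    # j_1(z) = sin(z)/z**2 - cos(z)/z
    return sin(z) / z**2 - cos(z) / z

# Known reference value: j_1(1) = sin(1) - cos(1) ≈ 0.301168678939757
assert abs(j1(1.0) - 0.301168678939757) < 1e-12
```

This is one sin, one cos and a couple of divisions per call, which is the baseline hyper() would have to compete with for small n.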

>
> By the way, jn() denotes an ordinary Bessel function in C99 math.h
> and scipy. You might want to change the names.

Yeah, I was thinking about this. Actually, C99 math.h doesn't have it,
only some extension:

http://en.wikipedia.org/wiki/Math.h#XSI_Extensions

sage uses spherical_bessel_J and bessel_J, mpmath uses besselj. I
thought we could use Jn for Bessel functions and jn for spherical
Bessel functions. scipy uses jn and sph_jn.

Ondrej

Vinzent Steinberg

May 16, 2009, 4:16:27 PM
to sympy-...@googlegroups.com
2009/5/16 Ondrej Certik <ond...@certik.cz>

That's interesting. Could you please post the test case where it fails?

Maybe there's just a bug in mpmath's secant or findroot.
 


>
> Do you think some of the functions should be rather implemented in mpmath?

> which functions do you mean exactly? The jn_zeros? I don't mind. But
> mpmath is sometimes very slow, e.g. I want to be able to use scipy
> optionally. It can be in mpmath, I have no problems with that. But
> then you would need the jn() function too in there, but that one uses
> sympy a lot, so I don't think it's that easy.


Well, Fredrik should decide this.


Vinzent

Ondrej Certik

May 16, 2009, 4:25:09 PM
to sympy-...@googlegroups.com

Just pull my branch and run tests. Then change the findroot code and you'll see.

Ondrej

Fredrik Johansson

May 16, 2009, 6:12:48 PM
to sympy-...@googlegroups.com
On Sat, May 16, 2009 at 12:30 PM, Ondrej Certik <ond...@certik.cz> wrote:
>> Numerically, there's no point implementing spherical Bessel functions using
>> any other formula than sqrt(pi/2/z)*besselj(v+0.5,z),
>> except adding some additional code to handle special cases.
>
> That boils down to calling hyper() in mpmath/functions.py. The
> question is whether that is faster than just evaluating one sin, one
> cos and two simple polynomials.
>
> e.g. is hyper comparably fast as for example sin? Or is it for example
> 3x slower?

It's quite a bit slower, but so is evaluating a polynomial.
The expanded formula is probably faster only for n <= N
where N = 2-3 or so.

mpmath.besselj could be improved to use faster series code
for half-integer arguments (as it does for integer arguments).
Then the benefits would immediately apply to both
besselj and spherical Bessel functions.

Fredrik

Vinzent Steinberg

May 17, 2009, 8:23:58 AM
to sympy-...@googlegroups.com
On May 16, 10:25 pm, Ondrej Certik <ond...@certik.cz> wrote:
>
> Just pull my branch and run tests. Then change the findroot code and you'll see.
>
> Ondrej

This is interesting. The secant method (used by default by findroot()) needs two starting values. If the user specifies only one value x, the second is automatically x + 0.25. Due to this, it sometimes did not find the correct root. You just found a case where this happens. :)
The easiest solution is to use (x, x + 0.01) instead of x as a starting point. Maybe Newton's method should be implemented in mpmath. I thought secant would be superior in any case, so I didn't do so... Or maybe we should choose a more sane default starting point for secant.

I attached a patch which fixes the issues with findroot in this case. If you want to play around with it, uncomment the debug print statements, it compares findroot's solution with scipy's.

Vinzent
0001-bessel-better-starting-points-for-secant-method.patch

Alan Bromborsky

May 17, 2009, 8:59:48 AM
to sympy-...@googlegroups.com
Vinzent Steinberg wrote:
> On May 16, 10:25 pm, Ondrej Certik <ond...@certik.cz
A standard technique is to search by halves until you get an interval
you know contains a root, and then switch to secant, Newton, or some
higher-order method to polish the root. Secant might fail if there are
multiple roots in the interval determined by searching by halves, but
you will know immediately that this has happened if any step of the
secant method goes outside that interval.
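In Python, the technique might be sketched like so (a toy implementation, not mpmath code):

```python
def polish_root(f, a, b, tol=1e-12):
    """Bisect [a, b] until the bracket is narrow, then polish with
    secant steps, falling back to bisection whenever a secant step
    leaves the bracketing interval."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("no sign change on [a, b]")
    # Search by halves until the bracket is small.
    while b - a > 1e-3:
        m = 0.5 * (a + b)
        fm = f(m)
        if fa * fm <= 0:
            b, fb = m, fm
        else:
            a, fa = m, fm
    # Secant polish; a step outside [a, b] signals trouble, so fall back.
    x0, x1, f0, f1 = a, b, fa, fb
    for _ in range(60):
        if f1 == f0:
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if not a <= x2 <= b:
            x2 = 0.5 * (a + b)  # step left the bracket: bisect instead
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

from math import sin
root = polish_root(sin, 3.0, 4.0)  # converges to pi
```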

Ondrej Certik

May 17, 2009, 2:00:58 PM
to sympy-...@googlegroups.com

Thanks for all the suggestions, and thanks Vinzent for looking into
that. Let's fix the findroot() in mpmath, e.g. create a Newton method
that only needs one point close to the root and converges by itself.

Your patch is an improvement and I think it's good enough for now.
But if scipy can do that, we should be able to do it too.

Ondrej

Vinzent Steinberg

May 17, 2009, 2:47:11 PM
to sympy-...@googlegroups.com
2009/5/17 Ondrej Certik <ond...@certik.cz>

> Thanks for all the suggestions and thanks Vinzent for looking into
> that. Let's fix the findroot() in mpmath, e..g create a newton method,
> that only needs one point, close to the root and it converges by
> itself.

Yeah, this will be trivial to implement.
 


> Your patch is an improvement and I think it's good enough for now.
> But if scipy can do that, we should be able to do it too.

In my humble opinion we are doing it better than scipy, because secant is faster. :)
Maybe it's even sufficient to change the standard starting points of secant.

Vinzent

Vinzent Steinberg

May 17, 2009, 2:48:59 PM
to sympy-...@googlegroups.com
2009/5/17 Alan Bromborsky <abr...@verizon.net>

> A standard technique is to search by halves until you get an interval
> you know you have a root in and switch to the secant, newton, or some
> higher order method to polish the root.  Secant might fail if there are
> multiple roots in the interval determines by searching by halves, but
> you will know immediately if that happens if any step of the secant
> method goes outside the search by halves interval.

Are you referring to bracketing methods like the Illinois method (combining bisection with secant)?
If so, it's already implemented.

Vinzent

Fredrik Johansson

May 17, 2009, 3:21:52 PM
to sympy-...@googlegroups.com
On Sun, May 17, 2009 at 11:00 AM, Ondrej Certik <ond...@certik.cz> wrote:
> Thanks for all the suggestions and thanks Vinzent for looking into
> that. Let's fix the findroot() in mpmath, e..g create a newton method,
> that only needs one point, close to the root and it converges by
> itself.

For a well-behaving function (spherical Bessel functions qualify),
the secant method with a small initial step size like 0.001
should converge and is faster than Newton's method,
provided that the starting point is good.

Fredrik

Vinzent Steinberg

May 17, 2009, 3:25:34 PM
to sympy-...@googlegroups.com
2009/5/17 Fredrik Johansson <fredrik....@gmail.com>

Yeah, this is what I changed in my patch. 0.001 was too small, but 0.01 did the job.
Shouldn't this be the default instead of 0.25?

Vinzent

Fredrik Johansson

May 17, 2009, 3:36:03 PM
to sympy-...@googlegroups.com
On Sun, May 17, 2009 at 12:25 PM, Vinzent Steinberg
<vinzent....@googlemail.com> wrote:
> Yeah, this is what I changed in my patch. 0.001 was too small, but 0.01 did
> the job. Shouldn't this be the default instead of 0.25?

There's a tradeoff, and there are examples where 0.01 fails too...

Fredrik

Vinzent Steinberg

May 17, 2009, 3:39:55 PM
to sympy-...@googlegroups.com
2009/5/17 Fredrik Johansson <fredrik....@gmail.com>

What's the tradeoff? 0.25 seems somewhat large to me. Do you think findroot() should use Newton's method by default to avoid the problems Ondrej had?

Maybe there's some heuristic to choose a good stepsize for a given function.

Vinzent

Ondrej Certik

May 17, 2009, 5:06:10 PM
to sympy-...@googlegroups.com
On Sun, May 17, 2009 at 12:39 PM, Vinzent Steinberg

<vinzent....@googlemail.com> wrote:
> 2009/5/17 Fredrik Johansson <fredrik....@gmail.com>
>>
>> On Sun, May 17, 2009 at 12:25 PM, Vinzent Steinberg
>> <vinzent....@googlemail.com> wrote:
>> > Yeah, this is what I changed in my patch. 0.001 was too small, but 0.01
>> > did
>> > the job. Shouldn't this be the default instead of 0.25?
>>
>> There's a tradeoff, and there are examples where 0.01 fails too...
>
> What's the tradeoff? 0.25 seems somewhat large to me. Do you think
> findroot() should use Newton's method by default to avoid the problems
> Ondrej had?
>
> Maybe there's some heuristic to choose a good stepsize for a given function.

I like that secant is faster.

What I don't like is that scipy works out of the box, while mpmath doesn't.

But I like Vinzent's patch, adding one parameter to findroot is a
reasonable tradeoff.

Ondrej

Vinzent Steinberg

May 18, 2009, 3:52:50 AM
to sympy-...@googlegroups.com
2009/5/17 Ondrej Certik <ond...@certik.cz>


> On Sun, May 17, 2009 at 12:39 PM, Vinzent Steinberg
> <vinzent....@googlemail.com> wrote:
>> 2009/5/17 Fredrik Johansson <fredrik....@gmail.com>
>>>
>>> On Sun, May 17, 2009 at 12:25 PM, Vinzent Steinberg
>>> <vinzent....@googlemail.com> wrote:
>>> > Yeah, this is what I changed in my patch. 0.001 was too small, but 0.01
>>> > did
>>> > the job. Shouldn't this be the default instead of 0.25?
>>>
>>> There's a tradeoff, and there are examples where 0.01 fails too...
>>
>> What's the tradeoff? 0.25 seems somewhat large to me. Do you think
>> findroot() should use Newton's method by default to avoid the problems
>> Ondrej had?
>>
>> Maybe there's some heuristic to choose a good stepsize for a given function.
>
> I like that secant is faster.
>
> What I don't like is that scipy works out of the box, while mpmath doesn't.

I just added Newton's method to mpmath svn, so it should now also work out of the box if you are using method='newton'.
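So something like the following should now work (a sketch against the svn version; note that findroot's keyword argument is solver=):

```python
from mpmath import findroot, sqrt

# Newton's method needs only one starting point; mpmath differentiates
# the function numerically when no derivative is supplied.
root = findroot(lambda x: x**2 - 2, 1.0, solver="newton")
assert abs(root - sqrt(2)) < 1e-10
```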

Vinzent
