please review my spherical bessel patches:
http://github.com/certik/sympy/tree/special
Thanks,
Ondrej
Priit :)
Ah, thanks for noticing:
+ method = "sympy": uses the scipy's sph_jn and newton to find all roots,
+ which is faster than method="sympy", but it requires scipy and only
this should be scipy.
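For readers following along, the approach that docstring describes can be sketched like this (using scipy's modern name `spherical_jn`; the `sph_jn` of that era has since been removed from scipy):

```python
from scipy.optimize import newton
from scipy.special import spherical_jn

# The spherical Bessel function j0(x) = sin(x)/x has its first positive
# zero at pi.  Start the root finder from a nearby guess; with no
# derivative supplied, scipy.optimize.newton uses the secant method.
root = newton(lambda x: spherical_jn(0, x), 3.0)
```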
Apart from that, are there any other objections?
Ondrej
Unfortunately, it didn't converge for me at all. Any ideas why? I'll
try to write my own newton method and then see if it works.
>
> Do you think some of the functions should be rather implemented in mpmath?
which functions do you mean exactly? The jn_zeros? I don't mind. But
mpmath is sometimes very slow, e.g. I want to be able to use scipy
optionally. It can be in mpmath, I have no problems with that. But
then you would need the jn() function too in there, but that one uses
sympy a lot, so I don't think it's that easy.
Ondrej
Bessel function zeros, along with a couple of other inverse functions,
will benefit greatly from the more generic high-level interface
being added to mpmath.
Basically, the first 16 digits can be computed entirely using machine
precision arithmetic, and then in the high precision case, mpmath can
use the machine precision estimate as the initial argument to
findroot.
Fredrik
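A sketch of that two-stage idea, with a hand-written float Newton loop for the machine-precision stage and mpmath's findroot for the polish (the function and starting point here are my own illustration):

```python
import math
import mpmath

# Spherical j0(z) = sin(z)/z; its first positive zero is at pi.
def j0(z):
    return mpmath.sin(z) / z

# Stage 1: cheap machine-precision Newton iteration, ~16 correct digits.
x = 3.0
for _ in range(10):
    f = math.sin(x) / x
    df = (x * math.cos(x) - math.sin(x)) / x**2
    x -= f / df

# Stage 2: hand the float estimate to mpmath.findroot at high precision.
mpmath.mp.dps = 50
root = mpmath.findroot(j0, x)
```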
That'd be cool -- but it will still be slower than scipy. Sometimes
I just need it fast, and it's OK if, say, the 10th digit is wrong.
So I want to have both.
How will you implement jn()? One can use the same thing as in scipy,
but I find it pretty cool that sympy can now (I mean after the patches
are accepted) find an exact formula for it.
Ondrej
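For the record, this is the sort of exact formula meant here, assuming the patched API in which `jn(n, z)` is the spherical Bessel function (the naming sympy eventually shipped):

```python
from sympy import jn, symbols

z = symbols('z')

# Expanding the spherical Bessel function yields a closed form in sin/cos:
#   j0(z) = sin(z)/z
#   j1(z) = sin(z)/z**2 - cos(z)/z
j0_exact = jn(0, z).expand(func=True)
j1_exact = jn(1, z).expand(func=True)
```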
Some mpmath functions have turned out to be faster than their
scipy counterparts when I reimplemented them for float. I don't know
what the case will be for Bessel functions.
> How will you implement jn()? One can use the same thing as in scipy,
> but I find it pretty cool that sympy can now (I mean after the patches
> are accepted) find an exact formula for it.
Numerically, there's no point implementing spherical Bessel functions using
any other formula than sqrt(pi/2/z)*besselj(v+0.5,z),
except adding some additional code to handle special cases.
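The identity in question is j_v(z) = sqrt(pi/(2z)) * J_{v+1/2}(z); a quick numerical cross-check against the elementary closed form j0(z) = sin(z)/z:

```python
import mpmath

# Spherical Bessel function via the half-integer-order ordinary one.
def sph_jn(v, z):
    return mpmath.sqrt(mpmath.pi / (2 * z)) * mpmath.besselj(v + 0.5, z)

# Compare with the exact elementary expression for v = 0.
z = 2.5
err = abs(sph_jn(0, z) - mpmath.sin(z) / z)
```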
By the way, jn() denotes an ordinary Bessel function in C99 math.h
and scipy. You might want to change the names.
Fredrik
Interesting.
>
>> How will you implement jn()? One can use the same thing as in scipy,
>> but I find it pretty cool that sympy can now (I mean after the patches
>> are accepted) find an exact formula for it.
>
> Numerically, there's no point implementing spherical Bessel functions using
> any other formula than sqrt(pi/2/z)*besselj(v+0.5,z),
> except adding some additional code to handle special cases.
That boils down to calling hyper() in mpmath/functions.py. The
question is whether that is faster than just evaluating one sin, one
cos and two simple polynomials.
E.g., is hyper comparable in speed to sin, or is it, say,
3x slower?
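One rough way to answer that empirically (timings are machine-dependent, so no numbers are claimed; J_0(z) = 0F1(; 1; -z^2/4) is the series in question):

```python
import timeit
import mpmath

z = 2.5
# Time a plain elementary call against the hypergeometric evaluation.
t_sin = timeit.timeit(lambda: mpmath.sin(z), number=100)
t_hyper = timeit.timeit(lambda: mpmath.hyper([], [1], -z**2 / 4), number=100)
ratio = t_hyper / t_sin  # how many times slower hyper is than sin
```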
>
> By the way, jn() denotes an ordinary Bessel function in C99 math.h
> and scipy. You might want to change the names.
Yeah, I was thinking about this. Actually, C99 math.h doesn't have it,
only some extension:
http://en.wikipedia.org/wiki/Math.h#XSI_Extensions
Sage uses spherical_bessel_J and bessel_J, mpmath uses besselj. I
thought we could use Jn for Bessel functions and jn for spherical
Bessel functions. scipy uses jn and sph_jn.
Ondrej
>
> > Do you think some of the functions should be rather implemented in mpmath?
>
> which functions do you mean exactly? The jn_zeros? I don't mind. But
> mpmath is sometimes very slow, e.g. I want to be able to use scipy
> optionally. It can be in mpmath, I have no problems with that. But
> then you would need the jn() function too in there, but that one uses
> sympy a lot, so I don't think it's that easy.
Just pull my branch and run tests. Then change the findroot code and you'll see.
Ondrej
It's quite a bit slower, but so is evaluating a polynomial.
The expanded formula is probably faster only for n <= N
where N = 2-3 or so.
mpmath.besselj could be improved to use faster series code
for half-integer arguments (as it does for integer arguments).
Then the benefits would immediately apply to both
besselj and spherical Bessel functions.
Fredrik
Thanks for all the suggestions, and thanks Vinzent for looking into
that. Let's fix findroot() in mpmath, e.g. create a Newton method
that only needs one point close to the root and converges by itself.
Your patch is an improvement and I think it's good enough for now.
But if scipy can do that, we should be able to do it too.
Ondrej
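The one-point Newton method described above can be sketched as follows (a hand-rolled illustration with a forward-difference derivative, not mpmath's findroot; the step size h and tolerance are arbitrary choices):

```python
def newton_root(f, x0, tol=1e-10, maxiter=50, h=1e-8):
    """Newton iteration from a single starting point, approximating
    the derivative with a forward difference of step h."""
    x = x0
    for _ in range(maxiter):
        fx = f(x)
        dfx = (f(x + h) - fx) / h
        step = fx / dfx
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")
```

For instance, applied to the spherical j0(x) = sin(x)/x with x0 = 3.0, it should land on the first zero at pi.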
> Thanks for all the suggestions, and thanks Vinzent for looking into
> that. Let's fix findroot() in mpmath, e.g. create a Newton method
> that only needs one point close to the root and converges by itself.
> Your patch is an improvement and I think it's good enough for now.
> But if scipy can do that, we should be able to do it too.
A standard technique is to search by halves until you get an interval
you know you have a root in, then switch to the secant, Newton, or some
higher-order method to polish the root. Secant might fail if there are
multiple roots in the interval determined by searching by halves, but
you will know immediately if that happens: some step of the secant
method will go outside the search-by-halves interval.
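A sketch of that bracket-then-polish idea (names and tolerances are mine, not from the thread): given a bracket [a, b] on which f changes sign, take secant steps and fall back to bisection whenever a step escapes the bracket.

```python
def polish_root(f, a, b, tol=1e-12, maxiter=100):
    """Root of f in [a, b], where f(a) and f(b) have opposite signs.
    Secant steps, bisecting whenever a step leaves the bracket."""
    fa, fb = f(a), f(b)
    for _ in range(maxiter):
        # secant step (guarding against a flat chord)
        x = b - fb * (b - a) / (fb - fa) if fb != fa else 0.5 * (a + b)
        if not (a < x < b):          # secant left the bracket: bisect instead
            x = 0.5 * (a + b)
        fx = f(x)
        if abs(fx) < tol or b - a < tol:
            return x
        # shrink the bracket, keeping the sign change inside it
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
    return 0.5 * (a + b)
```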
For a well-behaved function (spherical Bessel functions qualify),
the secant method with a small initial step size like 0.001
should converge, and it is faster than Newton's method,
provided that the starting point is good.
Fredrik
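That variant, secant started from a single point plus a small initial step, can be sketched as (an illustration, not mpmath's implementation):

```python
def secant_root(f, x0, dx=0.001, tol=1e-10, maxiter=50):
    """Secant iteration started from x0 with a small initial step dx."""
    x1 = x0 + dx
    f0, f1 = f(x0), f(x1)
    for _ in range(maxiter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant iteration did not converge")
```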
There's a tradeoff, and there are examples where 0.01 fails too...
Fredrik
On Sun, May 17, 2009 at 12:39 PM, Vinzent Steinberg
<vinzent....@googlemail.com> wrote:
> 2009/5/17 Fredrik Johansson <fredrik....@gmail.com>
>>
>> On Sun, May 17, 2009 at 12:25 PM, Vinzent Steinberg
>> <vinzent....@googlemail.com> wrote:
>> > Yeah, this is what I changed in my patch. 0.001 was too small, but 0.01
>> > did the job. Shouldn't this be the default instead of 0.25?
>>
>> There's a tradeoff, and there are examples where 0.01 fails too...
>
> What's the tradeoff? 0.25 seems somewhat large to me. Do you think
> findroot() should use Newton's method by default to avoid the problems
> Ondrej had?
>
> Maybe there's some heuristic to choose a good stepsize for a given function.
I like that secant is faster.
What I don't like is that scipy works out of the box, while mpmath doesn't.
But I like Vinzent's patch, adding one parameter to findroot is a
reasonable tradeoff.
Ondrej