Numerical methods testing


materialisations

Dec 4, 2012, 1:01:19 AM12/4/12
to the-quanti...@googlegroups.com
Thanks for the feedback.  What I am trying to do is finish up my work on Akima interpolants, then push it to GitHub (once I get an account there), look at pre-CPAN stuff, and so on.  And then I can go back to other work that seemed to require Akima interpolants, finish that, and so on.

Perl testing is all about whether a module loads, whether a method executes, and so on.  My interest in computers from the beginning has been numerical methods, even though I have done lots of other stuff.  And while Perl/CPAN wants tests, if I am going to make tests, I want tests that show that some numerical method is producing accurate results.  And there is little on CPAN with this orientation.

How should one test an interpolant?  Having a piecewise cubic interpolate a constant, linear, quadratic, or cubic function should produce only roundoff error.  Is a sine wave a reasonable measure of interpolation?  Just a single period?  Does a person try to find the minimax error in interpolation?  Or should a person look at the error in integrating some function (like sin(x)) over one period?  Or does one pick something like the predator-prey problem, and look at finding the minimax error, or the integral over a period?

I don't think there is a Runge-Kutta-Fehlberg in CPAN (for a predator-prey); I would have to dust off yet more stuff from when I only did FORTRAN.

But, looking around CPAN, how does one "test" numerical methods?  Do you write a special version of your code using Number::WithError, run data through that, and come up with an expected range for results?

Sorry for asking this kind of stuff now.  I probably should have been asking this back in 1986.

Have a great day!
Gord

Jonathan "Duke" Leto

Dec 4, 2012, 3:02:14 AM12/4/12
to the-quanti...@googlegroups.com, ghav...@materialisations.com
Howdy,

> Perl testing is all about whether a module loads, whether a method executes, and so on.  My interest in computers from the beginning has been numerical methods, even though I have done lots of other stuff.  And while Perl/CPAN wants tests, if I am going to make tests, I want tests that show that some numerical method is producing accurate results.  And there is little on CPAN with this orientation.

Here are some tests from Math::ODE, a CPAN module that I released in 2000, which verify that the maximum error at any point in the approximated solution is less than O(h^4), since it uses a 4th-order Runge-Kutta method:

https://github.com/leto/math--ode/blob/master/t/odes.t
https://github.com/leto/math--ode/blob/master/t/ode_systems.t
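As an illustration of that O(h^4) claim (a self-contained sketch of the underlying idea, not the Math::ODE API): halving the step size of classical RK4 should shrink the global error by roughly a factor of 2^4 = 16.

```perl
use strict;
use warnings;

# Classical 4th-order Runge-Kutta, integrating y' = f(x, y) from x0 to x1.
sub rk4_solve {
    my ($f, $y, $x0, $x1, $h) = @_;
    my $x = $x0;
    while ($x < $x1 - 1e-12) {
        my $k1 = $f->($x,        $y);
        my $k2 = $f->($x + $h/2, $y + $h/2 * $k1);
        my $k3 = $f->($x + $h/2, $y + $h/2 * $k2);
        my $k4 = $f->($x + $h,   $y + $h   * $k3);
        $y += $h/6 * ($k1 + 2*$k2 + 2*$k3 + $k4);
        $x += $h;
    }
    return $y;
}

my $f = sub { my ($x, $y) = @_; return $y };   # y' = y, exact solution exp(x)

# Global error at x = 1 for a sequence of halved step sizes.
my @err;
for my $h (0.1, 0.05, 0.025) {
    push @err, abs( rk4_solve($f, 1, 0, 1, $h) - exp(1) );
}
for my $i (0 .. $#err - 1) {
    printf "error ratio after halving h: %.1f (expect ~16 for 4th order)\n",
        $err[$i] / $err[$i + 1];
}
```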
 

> How should one test an interpolant?  Having a piecewise cubic interpolate a constant, linear, quadratic, or cubic function should produce only roundoff error.  Is a sine wave a reasonable measure of interpolation?  Just a single period?  Does a person try to find the minimax error in interpolation?  Or should a person look at the error in integrating some function (like sin(x)) over one period?  Or does one pick something like the predator-prey problem, and look at finding the minimax error, or the integral over a period?

There are various measures of local and global error. I would recommend implementing the simplest error measures first and adding more as needed. A good start would be to emulate the Math::ODE tests above and verify that the error at each node of the interpolant is below what is expected.
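One hedged sketch of such a test, with piecewise-linear interpolation of f(x) = x**2 standing in for a real interpolant (for linear interpolation the standard error bound is max|f''| * h**2 / 8, which for this f is h**2 / 4; the structure below is illustrative, not any particular module's API):

```perl
use strict;
use warnings;
use Test::More;

my $f = sub { $_[0] ** 2 };

my @max_err;
for my $n (10, 20, 40) {
    my $h = 1 / $n;
    my @x = map { $_ * $h } 0 .. $n;
    my @y = map { $f->($_) } @x;

    # Sample midpoints, where the linear interpolation error peaks.
    my $worst = 0;
    for my $i (0 .. $n - 1) {
        my $mid    = $x[$i] + $h / 2;
        my $approx = ( $y[$i] + $y[$i + 1] ) / 2;   # linear interpolant at midpoint
        my $err    = abs( $approx - $f->($mid) );
        $worst = $err if $err > $worst;
    }
    push @max_err, $worst;
    cmp_ok( $worst, '<=', $h**2 / 4 + 1e-12, "error within h^2/4 for h = $h" );
}
done_testing();
```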


> I don't think there is a Runge-Kutta-Fehlberg in CPAN (for a predator-prey); I would have to dust off yet more stuff from when I only did FORTRAN.

Kind of. There is Math::GSL::ODEIV, which provides an interface to GSL's ODE solvers, including an RKF45:

http://search.cpan.org/dist/Math-GSL/lib/Math/GSL/ODEIV.pm

The catch is that it doesn't quite work yet. But if you wanted to use it and help make it work, I am sure we could get something functional.

Then there is Math::ODE : https://metacpan.org/module/Math::ODE

It is a pure-Perl implementation of RK4, which is quite a large subset of RKF45. The interface could be extended nicely by using RKF45 when no step size is given, or when the RKF45 algorithm is specified. I would very much enjoy a patch that adds RKF45 to Math::ODE.
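For flavor, here is a sketch of the adaptive idea using step doubling with plain RK4, rather than the embedded 4th/5th-order pair that RKF45 actually uses; the control logic (estimate local error, accept or retry, grow or shrink h) has the same shape. All names here are illustrative, not Math::ODE's API:

```perl
use strict;
use warnings;

# One classical RK4 step of size $h for y' = f(x, y).
sub rk4_step {
    my ($f, $x, $y, $h) = @_;
    my $k1 = $f->($x,        $y);
    my $k2 = $f->($x + $h/2, $y + $h/2 * $k1);
    my $k3 = $f->($x + $h/2, $y + $h/2 * $k2);
    my $k4 = $f->($x + $h,   $y + $h   * $k3);
    return $y + $h/6 * ($k1 + 2*$k2 + 2*$k3 + $k4);
}

sub adaptive_rk4 {
    my ($f, $y, $x, $x_end, $h, $tol) = @_;
    while ($x < $x_end - 1e-12) {
        $h = $x_end - $x if $x + $h > $x_end;
        my $big   = rk4_step($f, $x, $y, $h);            # one full step
        my $half  = rk4_step($f, $x, $y, $h/2);          # two half steps
        my $small = rk4_step($f, $x + $h/2, $half, $h/2);
        my $err   = abs($big - $small);                  # local error estimate
        if ($err <= $tol) {
            ($x, $y) = ($x + $h, $small);    # accept the more accurate result
            $h *= 2 if $err < $tol / 32;     # step was too cautious; grow it
        } else {
            $h /= 2;                         # too much error; retry smaller
        }
    }
    return $y;
}

my $f  = sub { my ($x, $y) = @_; return $y };   # y' = y, exact answer exp(1)
my $y1 = adaptive_rk4($f, 1, 0, 1, 0.5, 1e-8);
printf "y(1) = %.10f (exp(1) = %.10f)\n", $y1, exp(1);
```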

Going the Math::GSL route means platform-specific issues but the best performance, while Math::ODE eschews performance in the name of portability. Pick your poison :)

Also, Math::GSL::Interp might already do the Akima interpolation that you want:

https://metacpan.org/module/Math::GSL::Interp

but that is untested code. Have the appropriate amount of fun!

Duke

PS: Here are some more examples of numerical tests in Perl from Math::GSL::Interp :

https://github.com/leto/math--gsl/blob/master/t/Interp.t




--
Jonathan "Duke" Leto <jona...@leto.net>
Leto Labs LLC http://labs.leto.net
209.691.DUKE http://dukeleto.pl

David Mertens

Dec 4, 2012, 9:44:24 AM12/4/12
to the-quanti...@googlegroups.com

Gord -

I don't feel there's quite as much difference between your testing needs and most modules' testing needs. Your tests should demonstrate that your code works "as advertised." The tough question is, "What is your module advertising?"

First, your module will make certain guarantees about what sorts of data it expects and how it responds when the user provides something that doesn't make sense. For example, you probably expect an array of numbers as input. How does your module respond if I send it a hashref? Ideally it should do its best for as many kinds of input as possible: this is Perl, after all.

But the other thing that you advertise is that the module does Akima interpolation. What does this mean? It means that the interpolation error has certain mathematical or statistical properties. Mathematicians and computer scientists have hopefully worked these out for you already, so your job is to verify, statistically, that your module's numerical error hits those targets. A good computer scientist should have taken floating point rounding into account for "perfect" interpolation, and your module should agree.
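One way to encode "perfect up to roundoff" in a test is a comparison helper whose tolerance scales with machine epsilon and the magnitude of the expected value. A minimal sketch (the helper name and the 64-ulp slack are illustrative choices, not any standard):

```perl
use strict;
use warnings;

my $EPS = 2.220446049250313e-16;   # double-precision machine epsilon

# True when $got equals $expected up to $ulps units of roundoff,
# scaled by the magnitude of the expected value.
sub close_enough {
    my ($got, $expected, $ulps) = @_;
    $ulps //= 64;   # allow some accumulated roundoff, not just one ulp
    my $scale = abs($expected) > 1 ? abs($expected) : 1;
    return abs($got - $expected) <= $ulps * $EPS * $scale;
}

# Example: summing 0.1 ten times is not exactly 1.0 in binary floating
# point, but it is equal to 1.0 up to roundoff.
my $sum = 0;
$sum += 0.1 for 1 .. 10;
print "exact?  ", ($sum == 1.0              ? "yes" : "no"), "\n";   # usually "no"
print "close?  ", (close_enough($sum, 1.0)  ? "yes" : "no"), "\n";   # "yes"
```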

For all of this, Test::LectroTest can be very helpful, though an extension that allows statistically insignificant amounts of failure would be better. I would offer to help write such an extension, but I'm swamped lately. Likely other modules in the Stats namespace will be useful, too.

Hope this helps!

David


materialisations

Dec 5, 2012, 9:20:24 PM12/5/12
to the-quanti...@googlegroups.com
I think I was at CMU working on my M.Eng. when Intel released some chip that had a bug in the floating point.  It escapes me now what the bug was.  I have read about new hardware which only does single-precision IEEE floating point.  Providing tolerances for double precision isn't going to work well there.  Having played with ImageBuilder for OpenWRT, I was really surprised at some of the applications that people had ported to routers.

But I don't think just testing that the modules load, and similar, is enough for numerical-methods stuff, because I think the hardware (or a software emulation of it) may not do math the way we are assuming.  I hadn't been putting code in my modules to check that data was numeric; I suppose I should.  Thanks.

After a bit of searching, it seems the original Akima (1970) interpolants are expected to have errors of O(h**2).  So a person can check that.  I think corners in the data (which came along after Akima's original work) might have some influence on this, and I have yet to see an error analysis for data with corners.  As far as I know, the Math::GSL family does the original Akima interpolation, except with corners.  Not many comments in that GSL source code.  My module has 4 different Akima interpolants, and currently just one boundary condition.  It will do corners or not.  I think I may have 1 or 2 other boundary conditions available, one of those being not-a-knot.  With cubic splines, not-a-knot can change the interpolation error from O(h**2) to O(h**4).  I don't know yet what happens with my code.
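Checking an expected O(h**2) rate can be done empirically: measure the maximum error at two step sizes and estimate the observed order p = log2(err(h) / err(h/2)), which should come out near 2.  A sketch, with piecewise-linear interpolation of sin(x) standing in for the Akima interpolant (which is not shown):

```perl
use strict;
use warnings;

# Maximum midpoint error of piecewise-linear interpolation of sin(x)
# on [0, pi] with $n uniform intervals.
sub max_interp_err {
    my ($n) = @_;
    my $h   = 3.141592653589793 / $n;
    my $max = 0;
    for my $i (0 .. $n - 1) {
        my ($x0, $x1) = ($i * $h, ($i + 1) * $h);
        my $mid    = ($x0 + $x1) / 2;
        my $approx = ( sin($x0) + sin($x1) ) / 2;   # linear interpolant at midpoint
        my $err    = abs( $approx - sin($mid) );
        $max = $err if $err > $max;
    }
    return $max;
}

# Halve h and estimate the observed order of accuracy.
my ($e1, $e2) = ( max_interp_err(32), max_interp_err(64) );
my $p = log($e1 / $e2) / log(2);
printf "observed order: %.2f (expect ~2 for an O(h**2) method)\n", $p;
```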

As far as advertising goes, I hadn't thought about that.  I know that linear interpolation of points, when using Brent's method to find a minimum (which is why I started looking at Akima interpolation), doesn't help in accelerating convergence in some "bad" minima.  Akima interpolation seemed to have properties which I thought would be an improvement on linear interpolation.  I haven't gotten around to trying the Akima code I now have on the Brent's-method problem.  I just thought that if I had to implement Akima interpolation for this other thing, I might as well do it "properly" and give it to everyone.

My thought was that a good CPAN name would be Math::Interp2D::Akima, 2D because Y = f(X).  But, in the literature, one can often find this kind of interpolation described as 1D.  Any opinions on that?

With respect to the RKF (ODEIV), I have the background to help, but at the moment no time.  I need to get this other stuff finished.  And I am still hoping to see either some miracle from Jonathan, or upgrades on the Debian side, to allow Math::GSL to work on my machine.

Can I send someone some snow?  We have an abundance here in NW Alberta, Canada.  Some days I don't even go to the gym, as there is too much to shovel.  Have a good holiday season people.
