ODE solver draft

Timur Zaripov

May 12, 2015, 1:40:50 PM
to gonu...@googlegroups.com
Good evening, everyone,

I would like to share my first draft of an ODE solver library, which currently consists of just two Runge-Kutta method implementations with one test and benchmark each.

My goal at the moment is to get a better understanding of what the interface of the library should look like, what methods it should include, and how to test solvers. Any code review would be greatly appreciated, as would "real world" use case examples.

The current design philosophy is borrowed from the Rosetta Code example and looks like this:
NewSolverStepXXX functions construct stepping functions (with SolverStep or AdaptiveSolverStep signatures) from a given right-hand-side function (with the YPrime signature) and solver parameters (passed as a SolverPrm interface).
Stepping functions perform a single integration step for a given time t, solution vector y, and time step dt, and return new values of t, y, and possibly dt.
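
To make the shape of this concrete, here is a minimal sketch of the design; the names follow the draft's conventions, but the bodies and the choice of the Euler method are only illustrative (the draft's YPrime also carries a solver-parameter argument, omitted here for brevity):

// YPrime computes the right-hand side of the system y' = f(t, y).
type YPrime func(t float64, y []float64) []float64

// SolverStep performs a single integration step and returns the new t and y.
type SolverStep func(t float64, y []float64, dt float64) (float64, []float64)

// NewSolverStepEuler returns a SolverStep for the explicit Euler method,
// enclosing the right-hand-side function yp.
func NewSolverStepEuler(yp YPrime) SolverStep {
    return func(t float64, y []float64, dt float64) (float64, []float64) {
        dy := yp(t, y)
        for i := range y {
            y[i] += dt * dy[i]
        }
        return t + dt, y
    }
}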

A usage example can be seen in ode_test.go.

Current issues:
-- Parameters required by yprime are passed, without any checks, as part of a structure that implements the SolverPrm interface, and that's not good.
-- I don't have a clear testing strategy aside from solving a number of problems whose exact solutions are known.
-- I mostly use these solvers in my own small projects, which means I do not yet understand user needs in the more general case.
-- The methods are implemented in Go, which might not be the best approach. For example, the SciPy project wraps mature implementations from http://www.netlib.org/ode/. Optimization can wait until the interface stabilizes, though.

I believe the interface and tests should be the main priority at the beginning, and that's what I plan to focus on.

And so I humbly ask for your insight and advice.

With best regards,
Timur Zaripov.

Dan Kortschak

May 12, 2015, 5:05:23 PM
to Timur Zaripov, gonu...@googlegroups.com
Hi Timur and welcome.

I'll leave the discussion of the design here to Brendan and Vladimir, but I would like to clarify something with respect to the degree to which the code is based on the Rosetta Code example.

I ask because RC is published under the GNU FDL, and I'm not entirely clear how that plays with the gonum license. If by based you mean, "I read the article and wrote this," I think the issue is moot, but I am unclear on the implications if the code you have is a derivative work of the example (for example, what is fair use in this case).

thanks
Dan

Timur Zaripov

May 13, 2015, 6:55:05 AM
to Dan Kortschak, gonu...@googlegroups.com
Hello Dan,

The draft includes no code from RC, but the idea of a stepper function that encloses yprime comes from the RC example.

I did some research on ODE testing examples and found three good collections of test problems.

For now I will probably go through the problems, adding them to the tests one by one, and see what demands and requirements they put on the library.

With best regards,
Timur Zaripov.

Brendan Tracey

May 13, 2015, 12:35:48 PM
to gonu...@googlegroups.com, zar...@gmail.com, dan.ko...@adelaide.edu.au
Welcome Timur

I have used ODE solvers in the past, though I don't have a personal vendetta on how such a package should be designed. It is worth looking at existing ODE solvers to see what sorts of features seem to be needed. Obviously Matlab has ode23 and ode45 and scipy has an ODE suite, but you should also look at ODEPACK to see what it supports. At Gonum, we try to find the sweet spot of usability. We want to enable the same cost-saving features one can have in FORTRAN (good memory usage, interesting matrix types, etc.), while trying to present a nice entry to the package (at least in the simple case, but ideally in the complex case as well).

Off the top of my head, there are a couple of key features of ODE routines:
- Time integration scheme: explicit schemes, implicit schemes, and multistage schemes
- Boundary conditions: Neumann and Dirichlet at t = 0 are the easiest, but meeting other BCs can require shooting methods, etc.
- Adaptive vs. non-adaptive: do you have a consistent time step, or does the time step change to meet some local accuracy tolerance?

A good package will find a way to compose these pieces together. Rather than say "Our ODE solver is an RK4 that handles these three boundary conditions, and is adaptive in some cases", it would be much better to have something like

ode.Integrate(ode.System, ode.Scheme, ode.BCs, ode.Adaptive)

This way, if we implement a new scheme, say, the leapfrog method, we don't need to explicitly code all of the other pieces; it falls out from composition.

I haven't coded an ODE solver, so I don't know if such a complete decoupling is possible, especially if we want to maintain performance. I do think that's the direction we should be aiming for rather than a single implementation like ode45. For a sense of what I mean, see diff/fd, where the "stencil" of differentiation is separate from how it's used. Rather than have a "Central Difference" function and a "Forward Difference" function that each evaluate the function and take derivatives, there's one central piece of code that takes the stencil definition. ODEs are significantly more complicated than finite differences, but it seems like an ODE stencil may be abstractable in a similar fashion.

Timur Zaripov

May 13, 2015, 4:30:35 PM
to Dan Kortschak, gonu...@googlegroups.com
Good evening everyone,

Today I added the first problem from the test set at https://www.dm.uniba.it/~testset/, the HIRES problem.
There are now two additional functions to check whether scalars or arrays are equal within a tolerance, which makes the testing functions look quite a bit cleaner.

While implementing this test I got the feeling that the stepper function interface might be exposing too much. At the moment I am torn between the current implementation, which puts the responsibility of tracking t, y and dt on the user, and a Solver.Step() that hides t, y and dt inside.
The upside of the current approach is that it is simple and gives the user full control over the solution.

I will probably implement Solver.Step() just to see how it feels and flows.

With best regards,
Timur Zaripov.

Vladimír Chalupecký

May 13, 2015, 10:09:25 PM
to gonu...@googlegroups.com, Brendan Tracey, dan.ko...@adelaide.edu.au
Hi Timur, welcome to gonum.

I have been thinking about an ode package for some time now but have not yet come to a nice presentable design. I will try to give my comments and share my thoughts below anyway. Thanks in advance for your patience; I tend to be verbose.

I think that first of all we should make clear what kind of problems from the mathematical point of view we want to solve in the ode package. Brendan's mention of boundary conditions suggests that he's also considering boundary-value problems, but I think that the most useful and most natural would be to first focus on initial-value problems for systems of ODE's. Given our limited manpower that could entertain us for quite a while.

In particular, like Matlab's ode45() and other packages, I propose we should start with methods for finding an approximate solution to initial-value problems
y'(t) = f(t, y(t)),   t0 < t <= tf,
y(t0) = y0,
where f : R x R^n -> R^n and y0 \in R^n. Higher-order ODE's can be rewritten as a system of first-order ODE's straightforwardly.
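
For example (a standard reduction, added here for concreteness): the second-order equation u'' = -u becomes the first-order system y1' = y2, y2' = -y1 with (y1, y2) = (u, u').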

From the above it falls out that the entry function signature could look like:

ode.Integrate(sys ode.System, y0 []float64, t0, tf float64, s *ode.Settings, m ode.Stepper) // If s or m are nil, use defaults.

Yes, I am biased by optimize :-), but I really like its Local() signature.

The System type could look like:

type System struct {
    // Func computes the right-hand side of the system. Must be non-nil.
    Func func(t float64, y, yDot []float64)
    // Jacobian is optional; in a distant future implicit solvers would require it to be non-nil.
    Jacobian func(t float64, y []float64, jac mat64.Matrix)
}
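
As an illustration of how a client might fill in such a struct (my own example, not part of the proposal):

// A System for the Van der Pol oscillator
//     y1' = y2,  y2' = mu*(1 - y1^2)*y2 - y1,
// written against the System struct sketched above.
var vanDerPol = System{
    Func: func(t float64, y, yDot []float64) {
        const mu = 1.0
        yDot[0] = y[1]
        yDot[1] = mu*(1-y[0]*y[0])*y[1] - y[0]
    },
}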

The Stepper interface is the most important, but unfortunately also the most unclear element. It largely depends on what kind of integrators we want to have: fixed-step, or variable-step with local error control? I would argue for going directly for variable-step solvers. It is more work (with fixed-step solvers we could learn how to design the Stepper interface), but I cannot imagine solving a general IVP reliably with a fixed step size. Moreover, it takes from the user the burden of determining the right step size (with adaptivity we can provide reasonable defaults for the min and max step size and vary the step depending on the problem, but the user should be able to give us a hint on the initial step size in case we fail to guess it well). Anyway, a Stepper's responsibility should be just to take the solution from the current time point to the next one and give an error estimate for that step. It could look something like:

type Stepper interface {
    // Allocate and reuse memory, initialize the solver's parameters, etc.
    Init(y0 []float64)

    // Try to take a dt step from y(t) and store the result in yNext.
    Step(sys System, t, dt float64, y, yNext []float64, ???) (???) // Not sure about the details of this.

    ErrorEstimate(e []float64) float64 // Not sure about the details of this either.
}

The way of converting the error estimate into a step size to be attempted should also be an interface (configurable in Settings?).

The snippets above are similar to and slightly inspired by the interface of C++'s boost::numeric::odeint library and its integrate() functions. Their use of templates is similar to our use of interfaces.

As for the stencils, explicit and implicit Runge-Kutta methods can be represented by a kind of stencil called the Butcher array (or tableau), but naturally there are methods that cannot be expressed so conveniently. Still, it would probably be a good idea to start by implementing a generic Runge-Kutta stepper whose only configurable part is the Butcher array (plus possibly bools indicating whether a given variant is so-called First Same As Last, see e.g. http://en.wikipedia.org/wiki/Dormand-Prince_method, or whether local extrapolation is used). Of course, if we go for variable-step-only integrators, then the set of available Runge-Kutta methods becomes limited.
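
A sketch of what such a tableau-driven configuration might look like; the struct and field names here are hypothetical, not a concrete API proposal:

// Hypothetical representation of a Butcher tableau.
type ButcherTableau struct {
    A    [][]float64 // Runge-Kutta matrix; strictly lower triangular for explicit methods
    B    []float64   // weights
    BErr []float64   // weights of an embedded method for error estimation, nil if none
    C    []float64   // nodes
    FSAL bool        // First Same As Last
}

// The classic fourth-order Runge-Kutta method as a tableau.
var rk4 = ButcherTableau{
    A: [][]float64{
        nil,
        {0.5},
        {0, 0.5},
        {0, 0, 1},
    },
    B: []float64{1.0 / 6, 1.0 / 3, 1.0 / 3, 1.0 / 6},
    C: []float64{0, 0.5, 0.5, 1},
}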

Another important piece of functionality, in my opinion, is the output of the solution. Something similar to optimize.Recorder would work well, but in ode the Recorder will be more important than in optimize, and we should provide more ready-to-use implementations. There are various options, at least two: 1) dense output that outputs the solution at every step, and 2) output at constant step sizes (either given or determined by (interval length)/(number of steps)). The second option means keeping some history in the Recorder and doing interpolation, because the integrator steps and output steps will not coincide.
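
For instance, an ode Recorder could be as small as the following sketch (my assumption of a possible shape, loosely modeled on optimize.Recorder):

// Hypothetical Recorder interface for the ode package.
type Recorder interface {
    Init() error
    Record(t float64, y []float64) error
}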

Finally, some unsorted remarks:

* A good test suite will be important; there are surely widely used and accepted test problems.
* We want to reuse memory as much as possible. For example, I think that it is very common to solve the same problem many times with varying initial data. Reusing the solver without allocating new memory is important in such a case, but having the solver as a function closure precludes it.
* Let's first aim for explicit solvers for non-stiff problems, as you are now doing, for example the above-linked Dormand-Prince Runge-Kutta method. We will most likely not match Matlab's ode45() in terms of robustness (and speed?), as there are so many subtle implementation details, but something that works acceptably well for a wide range of cases should not be that difficult. A good set of test problems will tell us how good we are and where the limits of the implementation are.
* We will also want to collect and return some statistics, like the number of function evaluations, the number of successful and rejected steps, etc.

Any comments?

Cheers,

Vladimir

Brendan Tracey

May 13, 2015, 10:31:05 PM
to gonu...@googlegroups.com, vladimir....@gmail.com, dan.ko...@adelaide.edu.au, tracey....@gmail.com


I think that first of all we should make clear what kind of problems from the mathematical point of view we want to solve in the ode package. Brendan's mention of boundary conditions suggests that he's also considering boundary-value problems, but I think that the most useful and most natural would be to first focus on initial-value problems for systems of ODE's. Given our limited manpower that could entertain us for quite a while.

As an additional note, things like shooting methods can be implemented on top of an initial value problem.
 

Stepper interface is the most important, but unfortunately also the most unclear element. It largely depends on what kind of integrators we want to have: fixed-step or variable-step with local error control integrators? I would argue for going directly for variable-step solvers.

I also think it is quite reasonable to have multiple entry points to the package. The advantage of non-adaptive methods is that their computation time is very predictable. Possibly we could have an ode.Adaptive(stuff) and an ode.Fixed(stuff), where ode.Integrate does something reasonable. I'm not making any decisions, but I don't think we should feel forced to have only one entry point (though I do think the most obvious thing should do the thing that is normally right).
 


As for the stencils, explicit and implicit Runge-Kutta methods can be represented by a kind of stencil called the Butcher array (or tableau), but naturally there are methods that cannot be expressed so conveniently. Still, it would probably be a good idea to start by implementing a generic Runge-Kutta stepper whose only configurable part is the Butcher array (plus possibly bools indicating whether a given variant is so-called First Same As Last, see e.g. http://en.wikipedia.org/wiki/Dormand-Prince_method, or whether local extrapolation is used). Of course, if we go for variable-step-only integrators, then the set of available Runge-Kutta methods becomes limited.


Many of the stencils are of the form x^{i+1} = alpha x^{i-1} + beta x^{i}, or whatever. In Go, we have an advantage over something like Python or Matlab in that it could be really easy to express a banded matrix. We have the BLAS routines implemented, and we can add a Banded matrix type to mat64 if there's a reason for it. This would save significantly on storage and computation costs over mat64.Dense. For implicit solves we would have two.

Actually, maybe they aren't needed for ODE, but if we wanted to have a solver for boundary values on both sides it's available.

Vladimír Chalupecký

May 13, 2015, 11:30:58 PM
to gonu...@googlegroups.com, vladimir....@gmail.com, dan.ko...@adelaide.edu.au
Stepper interface is the most important, but unfortunately also the most unclear element. It largely depends on what kind of integrators we want to have: fixed-step or variable-step with local error control integrators? I would argue for going directly for variable-step solvers.

I also think it is quite reasonable to have multiple entry points to the package. The advantage to non-adaptive methods is that their computation time is very predictable. Possibly we could have an ode.Adaptive(stuff) and ode.Fixed(stuff) , where ode.Integrate does something reasonable. I'm not making any decisions, but I don't think we should feel forced to have only one entry point (though, I do think the most obvious thing should do the  thing that is normally right)

Sure, having ode.IntegrateAdaptive(), ode.IntegrateFixed() and ode.Integrate() is also fine with me.

There will be some (maybe moot) issues with fixed-step integration: if we promise to always use the given constant step size, we may either not be able to integrate up to tFinal without going past it (the user may not wish evaluation of f beyond tFinal, e.g., there might be a discontinuity), or we can integrate beyond tFinal, but then we must interpolate to get y(tFinal). That is, in this case there is an additional element of interpolation that participates in the integration (possibly inevitable for some integrators).
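
(For the record, the first alternative amounts to a clamp like the following sketch, at the cost of a shortened, irregular final step:)

// Sketch: never evaluate past tFinal by shrinking the final step.
if t+dt > tFinal {
    dt = tFinal - t
}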

As for the stencils, explicit and implicit Runge-Kutta methods can be represented by a kind of stencil called the Butcher array (or tableau), but naturally there are methods that cannot be expressed so conveniently. Still, it would probably be a good idea to start by implementing a generic Runge-Kutta stepper whose only configurable part is the Butcher array (plus possibly bools indicating whether a given variant is so-called First Same As Last, see e.g. http://en.wikipedia.org/wiki/Dormand-Prince_method, or whether local extrapolation is used). Of course, if we go for variable-step-only integrators, then the set of available Runge-Kutta methods becomes limited.

Many of the stencils are of the form  x^{i+1} = alpha x^{i-1} + beta x^{i}, or whatever. In Go, we have the advantage over something like python or matlab in that it could be really easy to express a banded matrix. We have the BLAS routines implemented, and we can add a Banded matrix type to mat64 if there's a reason for it. This would save significantly on storage and computation costs over mat64.Dense. For implicit solves we would have two.

Just to confirm: is your x my y? And does the upper index refer to the time level? If I understand you correctly, you are considering here rather a boundary-value problem for an ODE, aren't you?
 

Actually, maybe they aren't needed for ODE, but if we wanted to have a solver for boundary values on both sides it's available.
 
Ah, you probably are. Anyway, I would begin by focusing on initial-value problems.

Vladimír Chalupecký

May 14, 2015, 1:07:42 AM
to gonu...@googlegroups.com, vladimir....@gmail.com, dan.ko...@adelaide.edu.au
Many of the stencils are of the form  x^{i+1} = alpha x^{i-1} + beta x^{i}, or whatever. In Go, we have the advantage over something like python or matlab in that it could be really easy to express a banded matrix. We have the BLAS routines implemented, and we can add a Banded matrix type to mat64 if there's a reason for it. This would save significantly on storage and computation costs over mat64.Dense. For implicit solves we would have two.

I was inevitably thinking about this over lunch and now I (partly) understand. By the formula "x^{i+1} = alpha x^{i-1} + beta x^{i} or whatever" you probably refer to (two-step) linear multistep methods:
x_{i+2} + alpha_1 x_{i+1} + alpha_2 x_{i} = step * (beta_2 f_{i+2} + beta_1 f_{i+1} + beta_0 f_{i}),
implicit in general, explicit if beta_2 = 0. Yes, it would be nice to have them too.
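
(A concrete instance, added for illustration: the explicit two-step Adams-Bashforth method is the case alpha_1 = -1, alpha_2 = 0, beta_2 = 0, beta_1 = 3/2, beta_0 = -1/2, i.e. x_{i+2} = x_{i+1} + step * (3/2 f_{i+1} - 1/2 f_{i}).)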

Sonia Keys

May 14, 2015, 6:56:12 AM
to gonu...@googlegroups.com, dan.ko...@adelaide.edu.au
Hi people,

I have a mostly historical note: I wrote the RC Go example. I have no experience actually using ODE solvers, so I'm afraid I don't have much else to contribute to the conversation here.

I actually think the FDL is good for RC.  Because of the way the site works, the code there is good for ideas but hardly ever library quality.

Dan Kortschak

May 14, 2015, 7:59:49 AM
to Sonia Keys, gonu...@googlegroups.com
On Thu, 2015-05-14 at 03:56 -0700, Sonia Keys wrote:
> I actually think the FDL is good for RC. Because of the way the site
> works, the code there is good for ideas but hardly ever library
> quality.

I agree. Though it does have the potential to complicate things if
someone wants to get finicky.

Timur Zaripov

May 20, 2015, 10:02:21 AM
to gonu...@googlegroups.com, dan.ko...@adelaide.edu.au, tracey....@gmail.com
Hello everyone,

Today I checked the web interface of this group and realized a mistake I made. It seems that I have somehow missed all the messages except the one from Dan at the beginning of the thread. I am not sure if Inbox bundled them wrongly or I messed up my subscription in some way.
In any case, I am terribly sorry for this communication failure and will be more careful with groups from now on.

I will try to comprehend all the points made so far and respond to them later today.

For now I want to take a quick look at the work that I have done since May 13th, which unfortunately does not reflect the discussion that I've missed. Everything I write about below should make its way to GitHub today, once I clear all the tests.

I took the scipy.ode interface as a reference example and went with the following design.

There is a configuration struct for setting up a calculation:

type Problem struct {
    // Required parameters.
    YP YPrime    // function to compute the right-hand side of the system
    Y0 []float64 // initial values
    T  []float64 // time points to find the solution at, T[0] = t0, T[len(T)-1] = tEnd, len(T) >= 2

    // Parameters with default values.
    Dt0    float64 // initial time step
    ATol   float64 // absolute error tolerance
    RTol   float64 // relative error tolerance
    Safety float64 // safety factor on new step selection
}


Such a struct is easy to extend without breaking interfaces that rely on it. For example, it is easy to add a field for a Jacobian or a SolverType.

Then I define two methods on Problem:
func (p *Problem) SanityCheck() error
to validate the given values;

func (p Problem) Solve() ([][]float64, error)
to find a solution of the given IVP.

Solve() performs SanityCheck() and then uses a stepping function with the signature
type solverStep func(t float64, y []float64, dt float64) (float64, []float64, float64)

that is created by
func NewSolverStepRKM(yp YPrime, nEq int, p Problem) solverStep
or
func NewSolverStepRK4(yp YPrime, nEq int) solverStep

Both stepper creation functions are exported in case the client wants more control over the calculation process.
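
A hypothetical client-side use of this draft API might look like the following sketch (the exponential-decay right-hand side and the exact YPrime signature are my assumptions):

// Solve y' = -y, y(0) = 1 on [0, 1]; assumes imports "fmt" and "log".
p := ode.Problem{
    YP: func(t float64, y []float64) []float64 {
        return []float64{-y[0]}
    },
    Y0:   []float64{1},
    T:    []float64{0, 0.5, 1},
    ATol: 1e-6,
    RTol: 1e-6,
}
ys, err := p.Solve()
if err != nil {
    log.Fatal(err)
}
fmt.Println(ys) // solution values at the times in p.T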

Well before the initial commit I made an attempt at a flexible NewSolverStepRK design which used a Butcher array; it looked like this:
func newRK4Step(yp ypFunc, prm prm) ypStepFunc {
    // Runge-Kutta matrix
    a := [4][3]float64{
        {0, 0, 0},
        {0.5, 0, 0},
        {0, 0.5, 0},
        {0, 0, 1.},
    }
    // weights
    b := [4]float64{1. / 6., 1. / 3., 1. / 3., 1. / 6.}
    // nodes
    c := [4]float64{0, 0.5, 0.5, 1.}

    // We only want to allocate memory for the increments once to avoid excessive GC.
    dy := make([][]float64, len(c))
    for i := range c {
        dy[i] = make([]float64, prm.nEq)
    }

    solverStep := func(t float64, y []float64, dt float64) (float64, []float64) {
        copy(dy[0], y)
        dy[0] = yp(t, dy[0], prm)
        for i := 1; i < len(c); i++ {
            copy(dy[i], y)
            for iEq := range y {
                for j := 0; j < i; j++ {
                    dy[i][iEq] += a[i][j] * dy[j][iEq] * dt
                }
            }
            dy[i] = yp(t+dt*c[i], dy[i], prm)
        }

        for iEq := range y {
            for i := range b {
                y[iEq] += dt * dy[i][iEq] * b[i]
            }
        }

        return t + dt, y
    }
    return solverStep
}


Such an approach proved to be very inefficient compared to the current direct, "naive" implementation. I will add a separate benchmark to illustrate that, hopefully today.

With best regards,
Timur Zaripov.

Timur Zaripov

May 20, 2015, 11:30:24 AM
to gonu...@googlegroups.com, dan.ko...@adelaide.edu.au, tracey....@gmail.com
I have pushed the changes described earlier to GitHub.

Now it should be easier to see what I was talking about. Benchmarks of the Butcher table implementations for the RK4 and RKM methods are included. Unless I am missing something, the naive implementations are 1.5x-2x faster.

With best regards,
Timur Zaripov.

Timur Zaripov

May 20, 2015, 1:20:54 PM
to gonu...@googlegroups.com, dan.ko...@adelaide.edu.au, tracey....@gmail.com
OK, I will try to respond to the points made in the thread so far.

To start, I want to elaborate on the reasons that led me to the current design. Those are mainly maintainability reasons - it is significantly easier to extend the Problem structure and adjust SanityCheck() without breaking client code than to change functions with long parameter lists everywhere.
The downside is that the client now has to set up an ode.Problem structure and call Problem.Solve(), rather than call ode.Solve(<list of parameters>).

Personally I prefer the struct approach, but it is a subject for discussion, of course.

Domain.
Initial value problems for now. That's what I have some experience with. Also, the algorithms are a bit easier. A good starting point.

Adaptive vs Non-Adaptive step.
It doesn't appear to be a real issue to me so far. It can be an option in the Problem struct that is handled by Solve(), or the client can take full control over the process using the provided NewSolverStepXXX() directly.
Solve() already takes care of adjusting the time step to provide the solution at the exact times given by Problem.T.

Time integration scheme.
I believe features should be dictated by use cases. When there is a use case (test problem) that explicitly requires a different time integration scheme, the implementation will follow naturally.

Stepper interface.
As of now it is
type solverStep func(t float64, y []float64, dt float64) (float64, []float64, float64)
The output parameters are in the same order as the input ones. As the signature suggests, dt might be changed by the stepping function.

Output of the solution. 
For now Solve() returns a slice of y values at the given time stamps p.T. It naturally covers the use case that requires values at constant step sizes.
I think that an option to provide dense output of y values at every time step is important, but it might be tricky to implement in a safe and clean way given the current adaptive stepper implementation: the RKM stepper may or may not update t and y in a single call. It always updates dt though.
I will look at the Recorder implementation and think about it more.

Reusing the solver without allocating new memory.
The main problem I see is that it might make the solver unsafe for concurrent use. The current implementation is not safe either, but if it ever will be, solvers should have their own copies of everything.
I believe this optimization can wait until the overall design has stabilized.

I wrote about the performance of the Butcher array implementation before. It might be faster if I make use of the gonum/matrix package. I will look into it.

To sum up:
For now I am going to continue expanding the test suite (which can be reused even if my other efforts prove fruitless), will try to come up with a way to make Solve() concurrency safe, and will implement the Dormand-Prince method.

With best regards,
Timur Zaripov.

Vladimír Chalupecký

May 21, 2015, 1:52:40 AM
to gonu...@googlegroups.com, tracey....@gmail.com, dan.ko...@adelaide.edu.au
Hi,

Just a couple of quick comments.


Downside is that now client has to set up ode.Problem structure and call Problem.Solve(), rather than call ode.Solve(<list of parameters>).

Having a Problem struct that contains the right-hand-side function, the start and end times, and the initial condition is good. But I think that the T slice, the tolerances and the other parameters do not belong there; they are not part of the problem statement.

Solve() should not be a method on Problem, rather a function taking Problem as an argument. I would prefer to call it Integrate rather than Solve: ode.Integrate(Problem, Settings, Stepper, ...).

Solve() returning the whole integration history is overkill; it should really be done via some kind of recorder. It may then save the whole history in memory if that is indeed the desired behavior, or just dump it to disk, or anything else.

I don't think Solver is the proper interface to define. A (family of) Stepper interface(s) seems to be the key concept to me.
 
Adaptive vs Non-Adaptive step.
It doesn't appear to be a real issue to me so far. It can be an option in Problem struct, that is handled by Solve(), or client can take full control over the process using provided NewSolverStepXXX() directly.
Solve() already takes care of adjusting time step to provide solution at exact times provided by Problem.T.

I think it is a real issue. Not every stepper can provide an error estimate, so not every stepper can be used for adaptive time-stepping. What you are doing is to return the next step size from the stepper, so for every stepper able to adapt the time step: 1) it has to reimplement the adaptive logic, 2) the logic cannot be configured or replaced with another one, 3) the stepper cannot be used for non-adaptive time-stepping. I think that it would be better if a stepper did just what the name implies: step from one time to the next. And those steppers that are able to could also return an error estimate that can be used externally (e.g., by an Integrate() function) to adapt the time step (and the adaptivity can be another interface).
 
Reusing the solver without allocating new memory.
Main problem I see is that it might make solver unsafe for concurrent use. Current implementation is unsafe either, but if it will ever be, solvers should have their own copies of everything.
I believe that this optimization might wait until overall design is stabilized.

By reuse I didn't mean accessing one solver from various goroutines. I meant reusing the same solver for different integrations, e.g., each with different initial conditions, or even for a completely different problem of smaller, the same, or higher dimension than the previous one. That means that I should be able to tell the solver to initialize/reset itself while reusing the memory that it has already allocated (and reallocate only if the memory is not large enough).
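
A sketch of the reuse pattern being described, with a hypothetical Reset method on some stepper type:

// Reset prepares the stepper for a problem of dimension dim, growing the
// work buffer only when the already-allocated memory is not large enough.
func (s *rk4Stepper) Reset(dim int) {
    if cap(s.work) < dim {
        s.work = make([]float64, dim)
    }
    s.work = s.work[:dim]
}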
 
I wrote about performance of Butcher array implementation before. It might be faster if I make use of gonum/matrix package. I will look at it.
 
Once the right-hand side of the system becomes expensive to evaluate, the cost of adding together a couple of slices does not really matter. Anyway, in your case the difference could probably be eliminated by reordering and rewriting the for loops and by not executing them when the coefficients (e.g., a) are zero. But concrete solver implementations are not the main issue right now; identifying and defining the API in terms of interfaces and functions is.
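
For illustration, the stage-accumulation loop of the earlier newRK4Step sketch could skip the zero entries like this:

// Skip tableau coefficients that are zero; for classic RK4 this removes
// most of the multiply-adds in the stage accumulation.
for j := 0; j < i; j++ {
    if a[i][j] == 0 {
        continue
    }
    dy[i][iEq] += a[i][j] * dy[j][iEq] * dt
}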

I will try to write an outline of what I think the package could look like ... I mean, when I have time, that is.

Cheers,

Vladimir

Timur Zaripov

May 22, 2015, 12:36:47 PM
to Vladimír Chalupecký, gonu...@googlegroups.com, tracey....@gmail.com, dan.ko...@adelaide.edu.au
Hi

Having a Problem struct that contains the right-hand side function, start and end times and the initial condition is good. But I think that the T slice, the tolerances and the other parameters do not belong there, they are not part of the problem statement.
I agree. It's just that I want the code to evolve to a point where the logical thing to do becomes the practical thing to do. So far I couldn't articulate a good reason (use case) where splitting the Problem struct would be a benefit.
 
Solve() should not be a method on Problem, rather a function taking Problem as an argument. I would prefer to call it Integrate rather than Solve: ode.Integrate(Problem, Settings, Stepper, ...).
Solve and Integrate could be functions at different levels, for example:
Solve - does everything with defaults and minimal setup;
Integrate - for fine control over the components used;
Steppers - for full control.

Solve() returning the whole integration history is overkill, it should really be done via some kind of recorders. It may then save the whole history in memory if that is the indeed the desired behavior, or just dump it to the disk, or anything else.
Agree on this too. Still need to understand Recorder though. Takes time.
 
I think it is a real issue. Not every stepper can provide an error estimate so not every stepper can be used for adaptive time-stepping. What you are doing is to return the next step size from the stepper, so every stepper able to adapt time step 1) has to reimplement the adaptive logic 2) the logic cannot be configured or replaced with another one 3) the stepper cannot be used for non-adaptive time-stepping. I think that it would be better if a stepper did just what the name implies: to step from one time to the next. And those steppers that are able to could also return an error estimate that can be used externally (e.g., by an Integrate() function) to adapt the time step (and the adaptivity can be another interface).
I am starting to understand your point, but I think it will only become obvious to me when I implement at least two different adaptation schemes. Then the required shape will be easier to recognise.
 
Once the right-hand side of the system becomes expensive to evaluate, the cost of adding together a couple of slices does not really matter.
For the problems I solve it does matter. I do Monte-Carlo simulations of aerosol particles in porous media, and while the system for each particle is very simple, the number of particles to track is huge.
What I am trying to say is that there are applications where the stepper could be the bottleneck.

Timur Zaripov

May 26, 2015, 11:02:40 AM
to Vladimír Chalupecký, gonu...@googlegroups.com, tracey....@gmail.com, dan.ko...@adelaide.edu.au
Good evening,

I reworked the existing API to fit the Stepper+Adapter design that was proposed by Vladimir. I got rid of the benchmark and left only two tests for now to make iterating on the API easier.

Problem and Parameters are now separate structures.
type Stepper interface {
    Init(p *Problem, prm *Parameters)
    Step(t float64, y []float64, dt float64) (float64, []float64, float64)
}

type Adapter interface {
    Init(p *Problem, prm *Parameters)
    Adapt(dt, e float64) float64
}
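
For context, a classic step-size controller satisfying this Adapter interface could look like the following sketch (the formula is the standard safety-factor rule; the fields are my assumptions, not the draft's actual BasicAdaptor):

// basicAdapter implements the standard controller
//     dtNew = safety * dt * (tol/e)^(1/(order+1)).
// It assumes import "math".
type basicAdapter struct {
    tol, safety, order float64
}

func (a *basicAdapter) Init(p *Problem, prm *Parameters) {}

func (a *basicAdapter) Adapt(dt, e float64) float64 {
    return a.safety * dt * math.Pow(a.tol/e, 1/(a.order+1))
}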

An unfortunate consequence of the current implementation is that it becomes the client's responsibility to run something like
s := new(RKM)
a := new(BasicAdaptor)
(which I left inside Solve() for now).

I am not sure if the code looks any good, since I have no use case in mind.

As for fixed vs adaptive steps and steppers that cannot provide an error estimate, I think a simple convention could do for now - if the error estimate is zero or below, the Adapter is skipped.

With best regards,
Timur Zaripov.

Timur Zaripov

Jun 5, 2015, 8:43:41 AM
to Brendan Tracey, gonu...@googlegroups.com
Good afternoon everyone,

I apologize for disappearing. I was starting a new job in a different country, and that affected my free time quite significantly.

Which now gives me a good opportunity to review the code as if it wasn't written by me^^

--
With best regards,
Timur Zaripov.

> On 26 May 2015, at 16:20, Brendan Tracey <tracey....@gmail.com> wrote:
>
> Hi,
>
> Thanks for the work you’ve been putting in. I’m happy we’re making progress to an ODE solver. Sorry I have not participated in the discussion more. It’s a pretty busy time for me at the moment, and it takes a while to do constructive criticism on an API design. I hope in a week or two to have more time.
>
> Brendan

Vladimír Chalupecký

Aug 25, 2015, 1:30:44 AM
to gonum-dev, tracey....@gmail.com
Hi everyone,

It took me a while, but finally I got my version of an ode package to a partly presentable state. It is located at:

I am almost happy with it, but it is still a work in progress - what functionality to provide, what API to expose for it, and how to deal with floating-point effects. My main doubt is whether a reverse communication interface should be used like we have in optimize, but I have not come to a definitive conclusion.

What is implemented:
  • Explicit Runge-Kutta methods with and without embedded error estimates
  • Error control and step adaptivity
  • Three functions for integrating IVPs. The core function is IntegrateAdaptive(), which integrates an initial-value problem with local error control and an adaptive step size. The other two functions, IntegrateConst() and IntegrateNSteps(), integrate an IVP using a fixed step size. These two functions are almost identical and could/should be simplified.
  • A couple of tests and examples. The tests could be considerably improved.
What is not implemented, but would be nice to have:
  • Linear multistep methods (LMM): Currently the code is focused on Runge-Kutta methods, which are slightly easier to implement because there is no startup phase. There is a LinearMultistep struct, but it is just a sketch. It is possible that LMM will require the API to be changed.
  • Interpolation for dense output and root-finding: It is always possible to form a cubic Hermite interpolant from the values and derivatives at both sides of a step. However, some Runge-Kutta methods allow forming a higher-order interpolant at no or very low additional cost (a couple of function evaluations). But I have not yet decided on a good API for that. Typically, output is needed only on a small fraction of steps, so it seems wasteful to prepare data for interpolation on every step (even if no additional rhs evaluations are necessary).
  • Implicit methods: Without implicit methods we cannot solve stiff problems, but for implicit methods we need to solve systems of non-linear equations and we don't have such methods in gonum yet. Also, implicit methods need the Jacobian of the right-hand-side function and the current API does not include it. And implementing a solver for stiff problems is a lot of work.
The code is almost completely documented. I will appreciate any feedback.

Vladimir

Brendan Tracey

Aug 25, 2015, 1:52:57 AM
to Vladimír Chalupecký, gonum-dev
Awesome! I’ll take a look when I get a chance.

Brendan Tracey

Sep 6, 2015, 10:57:04 PM
to gonum-dev, vladimir....@gmail.com
Blitz of thoughts in no particular order

WeightedRMSNorm  can instead be stat.Moment. Doesn't seem like it needs to be exported

Controller seems like a bad name. Why not Filter?

Is the point of the package to solve ODEs, or also to support external solvers?

Are the control theory terms (I, PI, PID) typical in ODE? It seems like it may be better to use descriptive names

Should the controller have the constants exported?

Types (like DOPRI45) should probably comment that most users will use the constructors.

I think one of the RK types should just be RungeKutta (whichever is more common).

I don't think there's any point in the Function type. Just have it be a literal.

I wonder if there's a nice way to combine the adaptive and the non-adaptive. It would be really nice to be able to say ode.Integrate(problem, settings, method). Then the method could determine the adaptivity. I would imagine that most users (who just want their ODE solved) don't want to have to make the decision between adaptive and non-adaptive. The comments for Integrate could illuminate how to make basic decisions if the behavior isn't what they'd like.

I'd instead rather see the division in Integrate functions be on Problem than on Method. So, rather than Integrate and IntegrateAdaptive, I'd rather see something like IntegrateIVP. It seems much harder to put all problems in one box. It's also really nice to read
ode.Integrate(myfun, 0, t,  0, settings, method)

Maybe the From,To should be a [2]float64?

Seems like some of the Settings should be in the Method (like first step). There could be an AdaptiveSettings that is embedded by the adaptive methods?

Does State need to be exported?

Are there many different WeightedNorms for it to need to be a specific type?

You say that implicit methods need a non-linear solver. In my experience, implicit methods usually just need an iterative linear solver (because the problem sizes are huge), such as Gmres or Bcgstab.

Vladimír Chalupecký

Sep 29, 2015, 4:35:24 AM
to gonum-dev, vladimir....@gmail.com
Thanks for taking a look and sorry for my delayed reply, busy with optimize :-)
 
WeightedRMSNorm  can instead be stat.Moment. Doesn't seem like it needs to be exported

Ah, indeed, stat.MomentAbout can be used and yes, WeightedRootMeanSquareNorm() does not need to be exported.

Controller seems like a bad name. Why not Filter?

I am sure I considered Filter but then for some reason kept Controller. I will change it to Filter or some variant of it.
I think all names in the package are open and should be considered for improvement.
 
Is the point of the package to solve ODEs, or also to support external solvers?

I would like to support also external solvers, like we do in optimize.

Are the control theory terms (I, PI, PID) typical in ODE? It seems like it may be better to use descriptive names

Not sure if typical, but the papers that use these controllers to control the step size do use the control theory terms (like the reference in Controller's comment).
I will try to think of better names.
 
Should the controller have the constants exported?

Probably it should. I will change it.
 
Types (like DOPRI45) should probably comment that most users will use the constructors.

Ok, I will add it. 

I think one of the RK types should just be RungeKutta (whichever is more common).

I think that _the_ Runge-Kutta method is most often associated with the constant-stepsize fourth-order Runge-Kutta method. Yes, I can rename it to RungeKutta.
 
I don't think there's any point in the Function type. Just have it be a literal.

Ok.
 
I wonder if there's a nice way to combine the adaptive and the non-adaptive. It would be really nice to be able to say ode.Integrate(problem, settings, method). Then the method could determine the adaptivity. I would imagine that most users (who just want their ODE solved) don't want to have to make the decision between adaptive and non-adaptive. The comments for Integrate could illuminate how to make basic decisions if the behavior isn't what they'd like.
 
I'd instead rather see the division in Integrate functions be on Problem than on Method. So, rather than Integrate and IntegrateAdaptive, I'd rather see something like IntegrateIVP. It seems much harder to put all problems in one box. It's also really nice to read
ode.Integrate(myfun, 0, t,  0, settings, method)

Yes, this is one of the important design points. I would also prefer having just one integration method, but I think that you expressed at some point a wish to have more integration routines.

One option would be having, e.g., a Settings.FixedStep bool and deciding based on its value. There would have to be another field, Settings.Step,
which would give the actual step size (alternatively, the decision could be made based on zero or non-zero Step).
If the decision is "be adaptive" and Method is adaptive, everything is fine. If the Method is not adaptive then error? panic? default step size?

I like IntegrateIVP(). Later on there could be SolveBVP()? Do you like grouping the problem under ode.InitialValueProblem as it is now (I do) or should the parameters be passed to IntegrateIVP() separately?

Maybe the From,To should be a [2]float64?

I quite like From and To (alternatively Start and End). Why do you think it should be [2]float64?
 
Seems like some of the Settings should be in the Method (like first step). There could be an AdaptiveSettings that is embedded by the adaptive methods?

I am not sure, anything is possible. It is true that most of the current Settings are relevant only for adaptive methods (there is not much to set for constant stepsize solvers).
 
Does State need to be exported?

If we want to support external Steppers, then I think yes.
 
Are there many different WeightedNorms for it to need to be a specific type?

There is at least one that uses Inf-norm instead. Someone might want to use that (e.g., for comparison with a paper or what not) or have a norm that ignores the weights.
Or have a norm that is stricter in some regions of the phase space, e.g. a satellite in the vicinity of a body.
I wanted the norm to be configurable; other packages offer that too (Arkode often serves me as a great base for implementation details).
The default norm should definitely be the weighted root mean square norm.

As a side note, thanks to the ode package I grew fond of the WRMS norm which you originally had in optimize.
The maximum norm is non-differentiable, so the error norm may have irregular behavior, which is not nice if you want to have
nice, smoothly varying sequences of stepsizes and continuous dependence of the amount of work on the given tolerances.
Just saying in case we want the norm configurable in optimize too.
 
You say that implicit methods need a non-linear solver. In my experience, implicit methods usually just need an iterative linear solver (because the problem sizes are huge), such at Gmres or Bcgstab.

My motivation for having implicit methods is stiff problems, independently of whether they are large scale or not (also, it is so difficult to find non-stiff test problems; the vast majority of them are stiff).
With an explicit solver, to get a stable computation the stepsize must be very small, so you get much more accuracy than you asked for.
Implicit methods are stable with bigger stepsizes and are much more efficient for stiff problems. One implicit step is more expensive than an explicit one, but you have to take fewer steps, which compensates for the cost.
An implicit method leads to solving a non-linear system G(y) = 0.
We can use a functional iteration that does not need the Jacobian of the right-hand side, but that is again useful only for non-stiff systems.
Or we can use Newton iteration, which indeed needs to solve linear systems, but without any additional ingredients it's just a local method.
Anyway, here things get complicated: small/large problems, sparse Jacobians, variants of the Newton method (with line search, ...), direct/iterative linear solvers ...
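
To illustrate the functional iteration just mentioned, a toy scalar sketch (my own simplification, not package code; assumes import "math"):

// implicitEulerStep solves the implicit Euler equation
//     y1 = y0 + dt*f(t+dt, y1)
// by fixed-point iteration. This converges only when dt is small relative
// to the Lipschitz constant of f, which is why functional iteration helps
// only for non-stiff systems.
func implicitEulerStep(f func(t, y float64) float64, t, y, dt float64) float64 {
    yNext := y // initial guess
    for k := 0; k < 50; k++ {
        yNew := y + dt*f(t+dt, yNext)
        if math.Abs(yNew-yNext) < 1e-12 {
            return yNew
        }
        yNext = yNew
    }
    return yNext
}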

It is so much work, let's focus on the general API and the implementation of explicit methods.

What I see as key points are:

* The integration function: if we agree on ode.IntegrateIVP, this point will be closed. Open subpoints would be how to request integration with a constant stepsize and how to pass the problem to the function (separately or in an InitialValueProblem struct). Regarding constant stepsize, I think it would be OK if using it required some effort from the user (like setting something to true or to a non-zero value); it should be used only in special circumstances. Also, testing integration with a constant stepsize can reasonably be done only on trivial problems that do not require adaptivity (and are moreover non-stiff), otherwise they will take an eternity to finish or there will be a large error from the exact solution.

* Stepper and ErrorStepper interfaces: what is their responsibility, what data do they need, what do they return, how do we signal to them that a step has been rejected? Some steppers can also interpolate the solution within the last step, and some need additional function evaluations for that; should they use a reverse communication interface, and if so, how does it fit in with the evaluations needed for the interpolation? These are all questions that I tried to answer in my package, but I would feel more comfortable if they were reviewed and considered by someone else as well.

* Dense output (= output at time points other than integration points, useful for output or for event location, when we want to integrate to an a priori unknown time point that satisfies some g(t, y(t)) = 0):
Typically, solvers get a sequence of time points where the solution is needed. Then there are two options:
1) They integrate, and as they arrive at an output point, they gradually reduce the stepsize and hit the output point exactly.
Advantages: the solver can use the output points to heuristically guess the scale of the problem (and therefore the initial stepsize), and it returns a solution that it has actually integrated to that time point.
Disadvantage: output affects the integration, which does not seem like a natural thing to happen. Having to hit the output point exactly disrupts the smooth sequence of stepsizes and errors that we hope for.
2) The solver integrates ignoring the output points; only when it passes an output point does it return a solution obtained from interpolation.
Advantages: output does not affect the integration result.
Disadvantages: an interpolator of sufficiently high order must be available; the interpolated solution might have an error. (As I wrote previously, we can always do cubic polynomial interpolation, which is good for interpolating y from methods of up to 4th order.)

I am strongly inclined towards 2): interpolation might require extra evaluations, but for stringent tolerances output steps with interpolation will be infrequent, and for loose tolerances there will be few steps anyway.
Then the remaining question again is how to design an interface.
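
As a concrete sketch of the cubic Hermite fallback mentioned in 2), using values and derivatives at both ends of a step (scalar case for brevity; the basis functions are the standard ones):

// hermite evaluates the cubic Hermite interpolant on [t0, t1] at t,
// given y and y' at both endpoints of the step.
func hermite(t, t0, t1, y0, y1, dy0, dy1 float64) float64 {
    h := t1 - t0
    s := (t - t0) / h
    h00 := (1 + 2*s) * (1 - s) * (1 - s)
    h10 := s * (1 - s) * (1 - s)
    h01 := s * s * (3 - 2*s)
    h11 := s * s * (s - 1)
    return h00*y0 + h10*h*dy0 + h01*y1 + h11*h*dy1
}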

I will address all the points above and review how I implement the three * points. When the repository is updated, I'll let you know again.

V.

Brendan Tracey

Sep 29, 2015, 11:46:19 AM
to gonum-dev, vladimir....@gmail.com

 
I wonder if there's a nice way to combine the adaptive and the non-adaptive. It would be really nice to be able to say ode.Integrate(problem, settings, method). Then the method could determine the adaptivity. I would imagine that most users (who just want their ODE solved) don't want to have to make the decision between adaptive and non-adaptive. The comments for Integrate could illuminate how to make basic decisions if the behavior isn't what they'd like.
 
I'd instead rather see the division in Integrate functions be on Problem than on Method. So, rather than Integrate and IntegrateAdaptive, I'd rather see something like IntegrateIVP. It seems much harder to put all problems in one box. It's also really nice to read
ode.Integrate(myfun, 0, t,  0, settings, method)

Yes, this is one of the important design points. I would also prefer having just one integration method, but I think that you expressed at some point a wish to have more integration routines.

I think I only expressed that it may be necessary to split into multiple routines. If we can express a rich set of problem types with one function call, then great.
 

One option would be having e.g. Settings.FixedStep bool and decide on its value. There would have to be another field Settings.Step
which would give the actual step size (alternatively, the decision could be made based on zero or non-zero of Step).

Or if step == -1 then adaptive
 
If the decision is "be adaptive" and Method is adaptive, everything is fine. If the Method is not adaptive then error? panic? default step size?

Probably panic. I assume the user would have had to explicitly state they wanted a fixed step and an adaptive method.
 

I like IntegrateIVP(). Later on there could be SolveBVP()? Do you like grouping the problem under ode.InitialValueProblem as it is now (I do) or should the parameters be passed to IntegrateIVP() separately?

I can go either way. It's small enough that it's easy to pass as parameters. Naming it IVP may be better; the fully spelled-out name is long.
 

Maybe the From,To should be a [2]float64?

I quite like From and To (alternatively Start and End). Why do you think it should be [2]float64?
 
Seems like some of the Settings should be in the Method (like first step). There could be an AdaptiveSettings that is embedded by the adaptive methods?

I am not sure, anything is possible. It is true that most of the current Settings are relevant only for adaptive methods (there is not much to set for constant stepsize solvers).
 
Does State need to be exported?

If we want to support external Steppers, then I think yes.
 
Are there many different WeightedNorms for it to need to be a specific type?

There is at least one that uses Inf-norm instead. Someone might want to use that (e.g., for comparison with a paper or what not) or have a norm that ignores the weights.
Or have a norm that is stricter in some regions of the phase space, e.g. a satellite in the vicinity of a body.

Fair.
 
I wanted the norm to be configurable, other packages offer that too (Arkode serves me often as a great base for implementation details).
The default norm should definitely be the weighted root mean square norm.

Maybe we can just call it Norm. There is no unweighted norm, and it's clear from the documentation / signature that there are weights (though, change the input name to weights so autocomplete makes it especially clear)
 

As a side note, thanks to the ode package I grew fond of the WRMS norm which you originally had in optimize.
The maximum norm is non-differentiable, so the error norm may have irregular behavior, which is not nice if you want to have
nice, smoothly varying sequences of stepsizes and continuous dependence of the amount of work on the given tolerances.
Just saying in case we want the norm configurable in optimize too.

I think this is an example of different domains having different solutions. You frequently want to take derivatives in ODE solutions. You don't normally care about the derivative to your gradient tolerance.
 
 
You say that implicit methods need a non-linear solver. In my experience, implicit methods usually just need an iterative linear solver (because the problem sizes are huge), such as Gmres or Bcgstab.

My motivation for having implicit methods are stiff problems independently of whether they are large scale or not (also, it is so difficult to find non-stiff testing problems, vast majority of them are stiff).
With an explicit solver, to get a stable computation the stepsize must be very small so you get way much more accuracy than you asked for.
Implicit methods are stable with bigger stepsizes and are much more efficient for stiff problems.

Right.
 
One implicit step is more expensive than an explicit one, but you have to do less steps which compensates the cost.
An implicit method leads to solving a non-linear system G(y) = 0.

Oh, I see, the nonlinearity is because the function itself is nonlinear. I was confused because an implicit solver for a wave equation is a linear operator. I've been living in the PDE world too much, where we just make linear expansions so it's a linear solve.
 
We can use a functional iteration that does not need the Jacobian of the right-hand side, but that is again useful only for non-stiff systems.
Or we can use Newton iteration, which indeed needs to solve linear systems, but without any additional ingredients it's just a local method.
Anyway, here things get complicated: small/large problems, sparse Jacobians, variants of the Newton method (with line search, ...), direct/iterative linear solvers ...

It is so much work, let's focus on the general API and the implementation of explicit methods.

Agreed.
 

What I see as key points are:

* The integration function: if we agree on ode.IntegrateIVP, this point will be closed.

I think it's a good place to start to see how we like it.
 
Open subpoints would be how to request integration with constant stepsize and how to pass
problem to the function (separately or in a InitialValueProblem struct). Regarding constant stepsize, I think it would be ok if using it required some effort from the user (like setting something to true or non-zero value),

Yea, I don't think step size can be reasonably set automatically.
 
it should be used only in special circumstances. Also, testing integration with constant stepsize can be reasonably done only on trivial problems that do not require adaptivity (moreover non-stiff),
otherwise they will take an eternity to finish or there will be a large error from the exact solution.

* Stepper and ErrorStepper interfaces: what is their responsibility, what data they need, what they return, how to signal to them that a step has been rejected,
some steppers can also interpolate the solution within the last step, some need additional function evaluations for that,
should they use reverse communication interface, if so, how does it fit in with the evaluations needed for the interpolation?
These are all questions that I tried to answer in my package but I would feel more comfortable if they were reviewed and considered also by someone else.

I can try to look in the weeds if you'd like.

One (of the many) major differences between Optimize and Integrate is the relative overhead cost. Optimization functions are frequently much more expensive than the overhead of taking the step. ODEs, on the other hand, frequently have a cheap A(x). As a result, the overhead of going through an interface like Stepper may be significant (instead of the method being coded directly). Similarly, computing all of the stats may have a significant overhead, and thus it is worth having a boolean about recording. It is possible that it is better to have something like

type IVPIntegrator interface {
    IntegrateIVP(IVP) Result // or whatever the real signature is
}

Then RK4 is implemented explicitly. Where there is overlap between the methods, we can just construct explicit functions.

This may be less useful for supporting outside solvers as it takes more work, but we could at least provide a lot of the building blocks. Unfortunate, but ODE is closer to mat64 than Optimize in not wanting to sacrifice performance.
 

Brendan Tracey

Sep 29, 2015, 11:49:34 AM
to gonum-dev, vladimir....@gmail.com

* Dense output (= output at time points other than integration points, useful for output or for event location, when we want to integrate to an a priori unknown time point that satisfies some g(t, y(t)) = 0):
Typically, solvers get a sequence of time points where the solution is needed. Then there are two options:
1) They integrate and as they arrive to an output point, they gradually reduce the stepsize and hit the output point exactly.
Advantages: the solver can use the output points to heuristically guess the scale of the problem (and therefore the initial stepsize), and also it returns a solution that it has actually integrated to that time point.
Disadvantage: output affects the integration which does not seem like a natural thing to take place. Having to hit the output point exactly disrupts the smooth sequence of stepsizes and errors that we hope for.
2) The solver integrates ignoring the output points, only when it passes an output point it returns a solution obtained from interpolation.
Advantages: output does not affect the integration result.
Disadvantages: an interpolator of sufficiently high order must be available; the interpolated solution might have an error. (As I wrote previously, we can always do cubic polynomial interpolation which is good for interpolating y from methods of up to 4th order).

I am strongly inclined towards 2): interpolation might require extra evaluations, but for stringent tolerances output steps with interpolation will be infrequent and for loose tolerance there will be few steps anyway.
Then the remaining question again is how to design an interface.

Agreed in principle on 2. However, output points could affect the solution, for example Adjoint techniques that would refine to get the solution at the output points correct and not worry about the solution where it isn't needed to high accuracy.
 

Vladimír Chalupecký

Sep 29, 2015, 9:01:31 PM
to gonum-dev, vladimir....@gmail.com
Thanks for the comments.

> Or if step == -1 then adaptive

That would be kind of ambiguous, because we might be solving a final-value problem and be integrating back in time.

Anyway, writing yesterday's short novel, errrr, I mean the post, made me think about the API again, and I may have an idea how the steppers and step control might work together.
Actually, it is in a sense a combination of Timur's and my approaches. I will see how it works out.

>> I like IntegrateIVP(). Later on there could be SolveBVP()? Do you like grouping the problem under ode.InitialValueProblem as it is now (I do) or should the parameters be passed to IntegrateIVP() separately?
>
> I can go either way. It's small enough that it's easy to pass as parameters. Naming it IVP may be better, the full spelled out is long.

For the moment I will rename it to IVP and keep it, we can always change it to parameters.

Maybe we can just call it Norm.

Norm be it.

One (of the many) major differences between Optimize and Integrate is the relative overhead cost. Optimization functions are frequently much more expensive than the overhead of taking the step size.

Even if in optimization the overhead does not matter so much, I don't think we have that much overhead in the optimize package or that we are sacrificing performance there.
The optimize loop is very lean: it only evaluates, Location is not invalidated, and Methods have all the information to minimize any additional copying. Directioner has to do some stuff, but that's inevitable, just as it is with Steppers.
The only overhead comes from the reverse communication, because we are calling something all the time.
 
ODEs, on the other hand, frequently have a cheap A(x). As a result, the overhead of going through an interface like Stepper may be significant (instead of it being coded directly).
Similarly, computing all of the stats may have a significant overhead, and thus worth having a boolean about recording. 

It all depends. My previous use of ode solvers was for semi-discretized PDEs, where the right-hand side is not cheap.
The overhead comes from rejected steps, because one step means several function evaluations that have to be redone (in the case of implicit solvers that also includes the linear solves, Jacobian evaluation, ...), and from the stepper formula. I think that the overhead of going through an interface and collecting statistics is minor, even negligible.
I think it is more important to have good steppers, error estimates and step controllers, all trying to minimize rejected steps while satisfying the accuracy requirements.

type IVPIntegrator interface {
    IntegrateIVP(IVP) Result // or whatever the real signature is
}

This is rather too crude for my taste, but I might be wrong. I hope we can do better.

Agreed in principle on 2. However, output points could affect the solution, for example Adjoint techniques that would refine to get the solution at the output points correct and not worry about the solution where it isn't needed to high accuracy.

I don't know the details, but it sounds like this could be approached by having a Norm that would smoothly become stricter near the output points, while the solution at the output point itself would still be obtained through interpolation.

V.