y'(t) = f(t, y(t)), t0 < t <= tf,
y(t0) = y0,
ode.Integrate(sys ode.System, y0 []float64, t0, tf float64, s *ode.Settings, m ode.Stepper) // If s or m are nil, use defaults.
Func func(t float64, y, yDot []float64) // Right-hand side of the system. Must be non-nil.
Jacobian func(t float64, y []float64, jac mat64.Matrix) // Optional; in some distant future, implicit solvers would require this to be non-nil.
Init(y0 []float64) // Allocate and reuse memory, initialize solver's parameters, etc.
// Try to take a dt step from y(t) and store the result in yNext.
Step(sys System, t, dt float64, y, yNext []float64, ???) (???) // Not sure about details of this.
ErrorEstimate(e []float64) float64 // Not sure about details of this either.
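To make the control flow concrete, here is a rough sketch of what the driver loop inside Integrate could look like. This is purely illustrative: the Step signature and the Settings.InitialStep field are placeholders for the details marked ??? above, and a real driver would also grow dt again after accepted steps.

// Hypothetical core of ode.Integrate: repeatedly ask the Stepper for
// a step and shrink dt when the local error estimate is too large.
func integrate(sys System, y0 []float64, t0, tf float64, s *Settings, m Stepper) []float64 {
	m.Init(y0)
	y := append([]float64(nil), y0...)
	yNext := make([]float64, len(y0))
	e := make([]float64, len(y0))
	t, dt := t0, s.InitialStep
	for t < tf {
		if t+dt > tf {
			dt = tf - t // do not step past the end of the interval
		}
		m.Step(sys, t, dt, y, yNext, e)
		if m.ErrorEstimate(e) > 1 {
			dt /= 2 // local error too large: reject the step and retry
			continue
		}
		t += dt
		copy(y, yNext)
	}
	return y
}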
I think that first of all we should make clear what kind of problems, from the mathematical point of view, we want to solve in the ode package. Brendan's mention of boundary conditions suggests that he is also considering boundary-value problems, but I think the most useful and most natural path is to focus first on initial-value problems for systems of ODEs. Given our limited manpower, that could entertain us for quite a while.
The Stepper interface is the most important, but unfortunately also the most unclear, element. It largely depends on what kind of integrators we want to have: fixed-step, or variable-step with local error control? I would argue for going directly for variable-step solvers.
As for the stencils, explicit and implicit Runge-Kutta methods can be represented by a kind of stencil called the Butcher array (or tableau), but naturally there are methods that cannot be expressed so conveniently. Still, it would probably be a good idea to start with implementing a generic Runge-Kutta stepper whose only configurable part is the Butcher array (plus possibly bools indicating if a given variant is so-called First Same As Last, see e.g. http://en.wikipedia.org/wiki/Dormand-Prince_method, or if local extrapolation is used or not). Of course, if we go for variable-step-only integrators, then the set of available Runge-Kutta methods becomes limited.
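For concreteness, the configurable part could be a small struct along these lines (a sketch; the field names are invented here):

// ButcherTableau describes an explicit Runge-Kutta method. For an
// s-stage method, A is the strictly lower-triangular s×s Runge-Kutta
// matrix, B the weights, and C the nodes. For embedded pairs, BErr
// holds the weights of the lower-order method used for error estimation.
type ButcherTableau struct {
	A    [][]float64
	B    []float64
	BErr []float64 // nil for methods without an embedded error estimate
	C    []float64
	FSAL bool // first-same-as-last: last stage of one step is first of the next
}

A concrete method like Dormand-Prince would then just be a predefined value of this type, and the generic stepper would iterate over the stages it describes.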
I also think it is quite reasonable to have multiple entry points to the package. The advantage of non-adaptive methods is that their computation time is very predictable. Possibly we could have an ode.Adaptive(stuff) and ode.Fixed(stuff), where ode.Integrate does something reasonable. I'm not making any decisions, but I don't think we should feel forced to have only one entry point (though I do think the most obvious entry point should do the thing that is normally right).
Many of the stencils are of the form x^{i+1} = alpha x^{i-1} + beta x^{i}, or whatever. In Go, we have an advantage over something like Python or Matlab in that it would be really easy to express a banded matrix. We have the BLAS routines implemented, and we can add a Banded matrix type to mat64 if there's a reason for it. This would save significantly on storage and computation costs over mat64.Dense. For implicit solves we would have two such matrices.
Actually, maybe they aren't needed for ODEs, but if we ever want a solver for problems with boundary values on both sides, the machinery is available.
x_{i+2} + alpha_1 x_{i+1} + alpha_2 x_{i} = step * (beta_2 f_{i+2} + beta_1 f_{i+1} + beta_0 f_{i}),
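For example, the two-step Adams-Bashforth method is an instance of this stencil with alpha_1 = -1, alpha_2 = 0, beta_2 = 0 (explicit), beta_1 = 3/2 and beta_0 = -1/2. In code the update is trivial (a sketch; the names are made up):

// ab2Step performs one two-step Adams-Bashforth update:
// x_{i+2} = x_{i+1} + step*(3/2 f_{i+1} - 1/2 f_i).
func ab2Step(step float64, xCur, fCur, fPrev []float64) []float64 {
	xNext := make([]float64, len(xCur))
	for i := range xCur {
		xNext[i] = xCur[i] + step*(1.5*fCur[i]-0.5*fPrev[i])
	}
	return xNext
}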
type Problem struct {
// required parameters
YP YPrime // function to compute right side of the system
Y0 []float64 // initial values
T []float64 // time points to find solution at, T[0] = t0, T[len(T)-1] = tEnd, len(T) >= 2
// parameters with default values
Dt0 float64 // initial time step
ATol float64 // absolute error tolerance
RTol float64 // relative error tolerance
Safety float64 // safety factor on new step selection
}
func (p *Problem) SanityCheck() error
func (p Problem) Solve() ([][]float64, error)
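Client code would then look roughly like this (a hypothetical sketch; it assumes YPrime is func(t float64, y []float64) []float64, which is not spelled out above):

// Solve y' = -y, y(0) = 1 on [0, 1], asking for output at t = 0.5 and t = 1.
p := ode.Problem{
	YP:   func(t float64, y []float64) []float64 { return []float64{-y[0]} },
	Y0:   []float64{1},
	T:    []float64{0, 0.5, 1},
	Dt0:  1e-3,
	ATol: 1e-8,
	RTol: 1e-6,
}
if err := p.SanityCheck(); err != nil {
	log.Fatal(err)
}
sol, err := p.Solve()
if err != nil {
	log.Fatal(err)
}
// sol[i] approximates y at time p.T[i].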
type solverStep func(t float64, y []float64, dt float64) (float64, []float64, float64)
func NewSolverStepRKM(yp YPrime, nEq int, p Problem) solverStep
func NewSolverStepRK4(yp YPrime, nEq int) solverStep
func newRK4Step(yp ypFunc, prm prm) ypStepFunc {
	// Runge-Kutta matrix (strictly lower triangular for an explicit method)
	a := [4][3]float64{
		{0, 0, 0},
		{0.5, 0, 0},
		{0, 0.5, 0},
		{0, 0, 1},
	}
	// weights
	b := [4]float64{1. / 6., 1. / 3., 1. / 3., 1. / 6.}
	// nodes
	c := [4]float64{0, 0.5, 0.5, 1}
	// We only want to allocate memory for the stage increments once,
	// to avoid excessive gc.
	dy := make([][]float64, len(c))
	for i := range c {
		dy[i] = make([]float64, prm.nEq)
	}
	step := func(t float64, y []float64, dt float64) (float64, []float64) {
		copy(dy[0], y)
		dy[0] = yp(t, dy[0], prm)
		for i := 1; i < len(c); i++ {
			// Stage value: y + dt * sum_j a[i][j] * k_j.
			copy(dy[i], y)
			for iEq := range y {
				for j := 0; j < i; j++ {
					dy[i][iEq] += a[i][j] * dy[j][iEq] * dt
				}
			}
			// Overwrite the stage value with the stage derivative k_i.
			dy[i] = yp(t+dt*c[i], dy[i], prm)
		}
		// Combine the stages: y += dt * sum_i b[i] * k_i.
		for iEq := range y {
			for i := range b {
				y[iEq] += dt * dy[i][iEq] * b[i]
			}
		}
		return t + dt, y
	}
	return step
}
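For completeness, the snippet assumes supporting types roughly like these (reconstructed here from usage; the actual definitions in the proposal may differ):

// prm carries solver parameters.
type prm struct {
	nEq int // number of equations in the system
}

// ypFunc evaluates the right-hand side; it receives y in a scratch
// slice and returns the derivative (possibly reusing that slice).
type ypFunc func(t float64, y []float64, prm prm) []float64

// ypStepFunc advances the solution by dt and returns the new time
// and state.
type ypStepFunc func(t float64, y []float64, dt float64) (float64, []float64)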
...
The downside is that the client now has to set up an ode.Problem structure and call Problem.Solve(), rather than calling ode.Solve(<list of parameters>).
Adaptive vs non-adaptive step: it doesn't appear to be a real issue to me so far. It can be an option in the Problem struct that is handled by Solve(), or the client can take full control of the process using the provided NewSolverStepXXX() functions directly. Solve() already takes care of adjusting the time step to provide the solution at the exact times given in Problem.T.
Reusing the solver without allocating new memory: the main problem I see is that it might make the solver unsafe for concurrent use. The current implementation is not safe either, but if it ever is to be, solvers should have their own copies of everything. I believe this optimization can wait until the overall design has stabilized.
I wrote about the performance of the Butcher array implementation before. It might be faster if I make use of the gonum/matrix package; I will look at it.
Having a Problem struct that contains the right-hand side function, the start and end times, and the initial condition is good. But I think that the T slice, the tolerances and the other parameters do not belong there; they are not part of the problem statement.
Solve() should not be a method on Problem, but rather a function taking a Problem as an argument. I would also prefer to call it Integrate rather than Solve: ode.Integrate(Problem, Settings, Stepper, ...).
Solve() returning the whole integration history is overkill; it should really be done via some kind of recorder. That recorder may then save the whole history in memory if that is indeed the desired behavior, or just dump it to disk, or anything else.
I think it is a real issue. Not every stepper can provide an error estimate, so not every stepper can be used for adaptive time-stepping. What you are doing is returning the next step size from the stepper, so every stepper able to adapt the time step 1) has to reimplement the adaptive logic, 2) that logic cannot be configured or replaced with another one, and 3) the stepper cannot be used for non-adaptive time-stepping. I think it would be better if a stepper did just what the name implies: step from one time to the next. Those steppers that are able to could also return an error estimate that can be used externally (e.g., by an Integrate() function) to adapt the time step (and the adaptivity can be another interface).
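Expressed as interfaces, the split could look like this (a sketch; the names and the exact ErrorEstimate signature are tentative):

// Stepper only advances the solution; it contains no step size control.
type Stepper interface {
	Step(sys System, t, dt float64, y, yNext []float64) error
}

// ErrorStepper is implemented by steppers that can also estimate the
// local error of the last step. An external driver such as Integrate()
// uses the estimate to adapt dt.
type ErrorStepper interface {
	Stepper
	ErrorEstimate(e []float64) float64
}

Integrate() could then type-assert for ErrorStepper when adaptive stepping is requested, and refuse (or fall back) otherwise.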
Once the right-hand side of the system becomes expensive to evaluate, the cost of adding together a couple of slices does not really matter.
WeightedRMSNorm can instead be stat.Moment. Doesn't seem like it needs to be exported
Controller seems like a bad name. Why not Filter?
Is the point of the package to solve ODEs, or also to support external solvers?
Are the control theory terms (I, PI, PID) typical in ODE? It seems like it may be better to use descriptive names
Should the controller have the constants exported?
Types (like DOPRI45) should probably comment that most users will use the constructors.
I think one of the RK types should just be RungeKutta (whichever is more common).
I don't think there's any point in the Function type. Just have it be a literal.
I wonder if there's a nice way to combine the adaptive and the non-adaptive. It would be really nice to be able to say ode.Integrate(problem, settings, method). Then the method could determine the adaptivity. I would imagine that most users (who just want their ODE solved) don't want to have to make the decision between adaptive and non-adaptive. The comments for Integrate could illuminate how to make basic decisions if the behavior isn't what they'd like.
I'd instead rather see the division in Integrate functions be on Problem than on Method. So, rather than Integrate and IntegrateAdaptive, I'd rather see something like IntegrateIVP. It seems much harder to put all problems in one box. It's also really nice to read
ode.Integrate(myfun, 0, t, 0, settings, method)
Maybe the From,To should be a [2]float64?
Seems like some of the Settings should be in the Method (like first step). There could be an AdaptiveSettings that is embedded by the adaptive methods?
Does State need to be exported?
Are there many different WeightedNorms for it to need to be a specific type?
You say that implicit methods need a non-linear solver. In my experience, implicit methods usually just need an iterative linear solver (because the problem sizes are huge), such as Gmres or Bcgstab.
Regarding being able to write ode.Integrate(myfun, 0, t, 0, settings, method): yes, this is one of the important design points. I would also prefer having just one integration method, but I think that you expressed at some point a wish to have more integration routines.
One option would be having e.g. a Settings.FixedStep bool and deciding based on its value. There would have to be another field, Settings.Step, which would give the actual step size (alternatively, the decision could be made based on Step being zero or non-zero).
If the decision is "be adaptive" and the Method is adaptive, everything is fine. If the Method is not adaptive, then what should happen: error? panic? default step size?
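In code, the decision could be a simple type assertion (a sketch; Result, integrateFixed and integrateAdaptive are made-up placeholders, and it assumes the ErrorStepper interface sketched earlier):

func integrate(p Problem, s Settings, m Stepper) (Result, error) {
	if s.FixedStep {
		// Constant stepsize: any Stepper will do.
		return integrateFixed(p, s.Step, m)
	}
	es, ok := m.(ErrorStepper)
	if !ok {
		// The method cannot estimate its local error, so it cannot
		// adapt; an explicit error seems preferable to a silent
		// fallback to some default step size.
		return Result{}, errors.New("ode: method does not support adaptive stepping")
	}
	return integrateAdaptive(p, s, es)
}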
I like IntegrateIVP(). Later on there could be SolveBVP()? Do you like grouping the problem under ode.InitialValueProblem as it is now (I do), or should the parameters be passed to IntegrateIVP() separately?
Regarding From, To as a [2]float64: I quite like From and To (alternatively Start and End). Why do you think it should be a [2]float64?

Regarding moving some of the Settings (like the first step) into the Method, with an AdaptiveSettings embedded by the adaptive methods: I am not sure, anything is possible. It is true that most of the current Settings are relevant only for adaptive methods (there is not much to set for constant-stepsize solvers).

Regarding exporting State: if we want to support external Steppers, then I think yes.

Regarding WeightedNorms needing to be a specific type: there is at least one norm that uses the Inf-norm instead. Someone might want to use that (e.g., for comparison with a paper or what not), or have a norm that ignores the weights, or a norm that is stricter in some regions of the phase space, e.g. for a satellite in the vicinity of a body.
I wanted the norm to be configurable; other packages offer that too (Arkode often serves me as a great base for implementation details). The default norm should definitely be the weighted root mean square norm.
As a side note, thanks to the ode package I grew fond of the WRMS norm which you originally had in optimize. The maximum norm is non-differentiable, so the error norm may have irregular behavior, which is not nice if you want to have nice, smoothly varying sequences of stepsizes and continuous dependence of the amount of work on the given tolerances. Just saying, in case we want the norm configurable in optimize too.
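For reference, the weighted root mean square norm in question is only a few lines (a sketch of the standard definition, assuming scalar tolerances; per-component tolerances would be an easy extension):

// wrmsNorm returns the weighted root mean square norm of the error
// estimate e with weights 1/(atol + rtol*|y_i|). A result <= 1 means
// the estimated error is within the requested tolerances.
func wrmsNorm(e, y []float64, atol, rtol float64) float64 {
	var sum float64
	for i, ei := range e {
		w := ei / (atol + rtol*math.Abs(y[i]))
		sum += w * w
	}
	return math.Sqrt(sum / float64(len(e)))
}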
Regarding implicit methods needing only an iterative linear solver: my motivation for having implicit methods is stiff problems, independently of whether they are large scale or not (also, it is so difficult to find non-stiff testing problems; the vast majority of them are stiff). With an explicit solver, to get a stable computation the stepsize must be very small, so you get much more accuracy than you asked for. Implicit methods are stable with bigger stepsizes and are much more efficient for stiff problems.
One implicit step is more expensive than an explicit one, but you have to take fewer steps, which compensates for the cost. An implicit method leads to solving a non-linear system G(y) = 0.
We can use a functional iteration that does not need the Jacobian of the right-hand side, but that is again useful only for non-stiff systems. Or we can use Newton iteration, which indeed needs to solve linear systems, but without any additional ingredients it is just a local method. Anyway, here things get complicated: small/large problems, sparse Jacobians, variants of the Newton method (with line search, ...), direct/iterative linear solvers... It is so much work; let's focus on the general API and the implementation of explicit methods.
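To make the G(y) = 0 point concrete: for the backward Euler method, one step requires solving G(y) = y - y_n - dt*f(t+dt, y) = 0, and the functional iteration mentioned above is just the following (a sketch with made-up names):

// One backward Euler step by functional (fixed-point) iteration:
// iterate y <- y_n + dt*f(t+dt, y). This needs no Jacobian, but it
// converges only when dt times the Lipschitz constant of f is small,
// which is exactly what fails for stiff systems; those need Newton.
func backwardEulerFunctional(f func(t float64, y []float64) []float64, t, dt float64, yn []float64, iters int) []float64 {
	y := append([]float64(nil), yn...) // initial guess: y_n
	for k := 0; k < iters; k++ {
		fy := f(t+dt, y)
		for i := range y {
			y[i] = yn[i] + dt*fy[i]
		}
	}
	return y
}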
What I see as the key points:

* The integration function: if we agree on ode.IntegrateIVP, this point will be closed. Open subpoints are how to request integration with constant stepsize and how to pass the problem to the function (separately or in an InitialValueProblem struct). Regarding constant stepsize, I think it would be OK if using it required some effort from the user (like setting something to true or to a non-zero value); it should be used only in special circumstances. Also, testing integration with constant stepsize can reasonably be done only on trivial problems that do not require adaptivity (and are moreover non-stiff), otherwise the tests will take an eternity to finish or there will be a large error from the exact solution.

* Stepper and ErrorStepper interfaces: what is their responsibility, what data do they need, what do they return, and how do we signal to them that a step has been rejected? Some steppers can also interpolate the solution within the last step, and some need additional function evaluations for that. Should they use a reverse communication interface, and if so, how does that fit in with the evaluations needed for the interpolation? These are all questions that I tried to answer in my package, but I would feel more comfortable if they were reviewed and considered also by someone else.

* Dense output (= output at time points other than integration points, useful for output or for event location, when we want to integrate to an a priori unknown time point that satisfies some g(t, y(t)) = 0). Typically, solvers get a sequence of time points where the solution is needed. Then there are two options:

1) They integrate, and as they arrive at an output point they gradually reduce the stepsize and hit the output point exactly. Advantages: the solver can use the output points to heuristically guess the scale of the problem (and therefore the initial stepsize), and it returns a solution that it has actually integrated to that time point. Disadvantage: output affects the integration, which does not seem like a natural thing to take place, and having to hit the output point exactly disrupts the smooth sequence of stepsizes and errors that we hope for.

2) The solver integrates ignoring the output points; only when it passes an output point does it return a solution obtained by interpolation. Advantage: output does not affect the integration result. Disadvantages: an interpolator of sufficiently high order must be available, and the interpolated solution has its own error. (As I wrote previously, we can always do cubic polynomial interpolation, which is good for interpolating y from methods of up to 4th order.)

I am strongly inclined towards 2): interpolation might require extra evaluations, but for stringent tolerances output steps with interpolation will be infrequent, and for loose tolerances there will be few steps anyway. The remaining question, again, is how to design the interface.
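For reference, the cubic interpolation mentioned above is Hermite interpolation over one step, using only data the solver already has at the step endpoints (a sketch):

// cubicHermite interpolates y(t) for t in [t0, t1] from the solution
// and derivative values at both endpoints. The result has O(h^4)
// error, sufficient for methods of up to 4th order.
func cubicHermite(t, t0, t1 float64, y0, y1, f0, f1 []float64) []float64 {
	h := t1 - t0
	s := (t - t0) / h // normalized position in the step, in [0, 1]
	// Hermite basis polynomials.
	h00 := (1 + 2*s) * (1 - s) * (1 - s)
	h10 := s * (1 - s) * (1 - s)
	h01 := s * s * (3 - 2*s)
	h11 := s * s * (s - 1)
	out := make([]float64, len(y0))
	for i := range out {
		out[i] = h00*y0[i] + h*h10*f0[i] + h01*y1[i] + h*h11*f1[i]
	}
	return out
}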
Maybe we can just call it Norm.
One (of the many) major differences between Optimize and Integrate is the relative overhead cost. Optimization functions are frequently much more expensive than the overhead of taking a step.
ODEs, on the other hand, frequently have a cheap A(x). As a result, the overhead of going through an interface like Stepper may be significant (instead of it being coded directly).
Similarly, computing all of the stats may have a significant overhead, and thus it may be worth having a boolean that controls recording.
type IVPIntegrator interface {
IntegrateIVP(IVP) Result // or whatever the real signature is
}
Agreed in principle on 2). However, output points could affect the solution, for example with adjoint techniques that refine to get the solution at the output points correct and do not worry about high accuracy where it isn't needed.