On Fri, 2015-08-07 at 23:32 -0700, Brendan Tracey wrote:
> Here is my opinion on the packages:
>
> BLAS -- blas/ blas/native blas/cgo are done. The only thing I would
> consider changing in blas64 are the constant inputs, for example swapping
> blas.Transpose and blas.Diag for booleans. Otherwise I am happy with the
> API.
Agreed, except that blas.Transpose and the others are fixed by interop with
the cgo-backed implementation, and transpose is a ternary value with H
being the third state. The other issue is that with the low-level
packages I would like to keep the zero value for these as invalid, for
safety reasons.
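For concreteness, keeping the zero value invalid looks something like this (a sketch only; the exact constant values are an assumption, not necessarily what blas defines):

    // Transpose is a ternary flag: no transpose, transpose, or
    // conjugate transpose (H). It cannot be collapsed into a bool
    // without losing the third state needed by the complex routines.
    type Transpose byte

    const (
        // None of the valid constants is the zero value of the type,
        // so an unset field is detectably invalid rather than silently
        // meaning "no transpose".
        NoTrans   Transpose = 'N'
        Trans     Transpose = 'T'
        ConjTrans Transpose = 'C'
    )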
> LAPACK -- Same as for BLAS. The signatures are mostly constrained by the
> lapack functions.
Also agreed. We do need to have a caveat on the API stability promise
here that the Float64 interface is subject to change as functions are
added.
> Matrix:
> - I would like to see the changes suggested in 38, moving many of the
> methods into functions (like Col)
> - The Solve changes (suggested in 169)
> - Do we still want maybe/maybefloat?
Do we want to have optional errors? If not then no.
> - Deleting of interfaces (138)
> - Inverse should be a method, not a function (ala 38)
> - Should we have a Vectorer interface instead of just *Vector (like the
> other types?). On the one hand, there's not much to be gained with the
> interface, but on the other hand it may help consistency if/when we add
> sparse support? I'm not sure.
> - Dense.Norm should be a function (ala 38), but we should consider it
> taking in a constant type. Unlike floats.Norm, we don't support an
> arbitrary matrix norm, only a specific subset. Are the negative norms even
> matrix norms? Should they really be supported? (Just curious)
I just picked things that were provided by numpy.
> - We need to find how SVD/Eigen should be represented.
> - Is there a better way to go from decomposition types into triangular? We
> have "LFrom", but there are different Ls, as in LU and LQ. Right now LFrom
> takes in a *LU, but that cannot support both types. There's an additional
> complication that the L's are different, in that sometimes they have unit
> diagonal, sometimes not
Can we define an interface that gets this information across?
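For example, something along these lines (a sketch only; LDecomposer, LTo and UnitDiag are hypothetical names, not a proposed final API):

    // LDecomposer is any decomposition that produces a
    // lower-triangular factor.
    type LDecomposer interface {
        // LTo stores the lower-triangular factor of the
        // decomposition into dst.
        LTo(dst *TriDense)
        // UnitDiag reports whether the factor has an implicit unit
        // diagonal (as the L of an LU does) rather than a stored
        // general diagonal (as the L of an LQ does).
        UnitDiag() bool
    }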
> Agreed, except that blas.Transpose and the others are fixed by interop
> with the cgo-backed implementation, and transpose is a ternary value
> with H being the third state. The other issue is that with the
> low-level packages I would like to keep the zero value for these as
> invalid, for safety reasons.

Agree. I wouldn't change it in native/cgo. It's only if we wanted to change it in blas64.
> > - Do we still want maybe/maybefloat?
> Do we want to have optional errors? If not then no.

What do you mean by optional errors?
> > Agreed, except that blas.Transpose and the others are fixed by
> > interop with the cgo-backed implementation, and transpose is a
> > ternary value with H being the third state. The other issue is that
> > with the low-level packages I would like to keep the zero value for
> > these as invalid, for safety reasons.
> Agree. I wouldn't change it in native/cgo. It's only if we wanted to change it in blas64.

I think that unnecessarily specialises things - remember that we support complex BLAS at that level too.

> > > - Do we still want maybe/maybefloat?
> > Do we want to have optional errors? If not then no.
> What do you mean by optional errors?

The original design intention was to allow people to wrap potentially erroring functions with the maybe functions and have the panics that we make detectable. This is the reason for the error type, and also why some panics explicitly pass a string rather than a mat64.Error.

Do we want that entire mechanism to go away? I don't use it, but I think it has utility given that we are pretty hard on the use of panic.
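Roughly, the mechanism works like this (a sketch of the intent rather than a copy of the current code):

    // Maybe runs fn and converts a panic carrying a mat64 Error into
    // an ordinary returned error. A panic raised with a plain string
    // is not one of our detectable errors, so it is re-panicked.
    func Maybe(fn func()) (err error) {
        defer func() {
            if r := recover(); r != nil {
                if e, ok := r.(Error); ok {
                    err = e
                    return
                }
                panic(r)
            }
        }()
        fn()
        return
    }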
Optimize:
I think we're getting close to the core set, but I don't know how Vladimir feels.
Root:
As is, it should be deleted. I have a long-standing commit there that I haven't thought about in a long time.
On Aug 20, 2015, at 9:55 PM, Vladimír Chalupecký <vladimir....@gmail.com> wrote:

> Optimize:
> I think we're getting close to the core set, but I don't know how Vladimir feels.

Yes, I agree that we are almost there, I like what we have. There are some points that I think we should consider (some we have discussed before, some not):

* Meaning or interpretation of the various iteration types and what actions we should take for each of them.
* I know that I proposed the Needs() method, but I am not very happy with it and I think/hope that we could get rid of it. The optimizers already announce what they need through EvaluationType, so we can allocate upon request in evaluate(). For the initial data we could take them from Settings if provided by the client and InitialIteration has not yet been announced. Maybe there is some issue with this but I feel it might work. (A sketch of this idea follows this message.)
* Location.Hessian should be mat64.Symmetric instead of *SymDense so that in the future the Hessian could be sparse. But then the question arises who allocates the concrete type. Methods in Init()? Some other way?

> Root:
> As is, it should be deleted. I have a long-standing commit there that I haven't thought about in a long time.

Agreed, delete it. With our experience from optimize we can create a very nice root package.

Just my two cents.

Vladimir
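A rough sketch of the allocate-on-request idea from the second bullet (the problem type here is a made-up stand-in for the optimize internals; Location, EvaluationType and the evaluation constants are the existing names):

    // problem is a hypothetical stand-in for the objective being
    // minimized.
    type problem struct {
        fn   func(x []float64) float64
        grad func(grad, x []float64)
    }

    func evaluate(p *problem, loc *Location, evalType EvaluationType) {
        switch evalType {
        case FuncEvaluation:
            loc.F = p.fn(loc.X)
        case GradEvaluation:
            // Allocate the gradient only when a method first
            // announces a gradient evaluation, instead of
            // pre-allocating it based on Needs().
            if loc.Gradient == nil {
                loc.Gradient = make([]float64, len(loc.X))
            }
            p.grad(loc.Gradient, loc.X)
        }
    }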
> * I know that I proposed the Needs() method, but I am not very happy with it and I think/hope that we could get rid of it. The optimizers already announce what they need through EvaluationType, so we can allocate upon request in evaluate().

The thing about this is that it could possibly fail after several function evaluations, but the check should be as soon as possible. A method may want to do several function evaluations before doing a gradient evaluation.

> For the initial data we could take them from Settings if provided by the client and InitialIteration has not yet been announced. Maybe there is some issue with this but I feel it might work.
> * Location.Hessian should be mat64.Symmetric instead of *SymDense so that in the future the Hessian could be sparse. But then the question arises who allocates the concrete type. Methods in Init()? Some other way?

It can't just be Symmetric. At least it has to be a SymRankOner and a RankTwoer. Given that, it's not clear to me that it should be generic. Will the matrix be kept sparse after many RankTwo updates? Even if it can, we already have the sparse BFGS version; that's what LBFGS is.
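Written out as interfaces, the constraint is something like this (the interface names are from this discussion; the signatures are modeled on the *SymDense methods and should be treated as an assumption):

    type SymRankOner interface {
        mat64.Symmetric
        // SymRankOne sets the receiver to a + alpha * x * x'.
        SymRankOne(a mat64.Symmetric, alpha float64, x *mat64.Vector)
    }

    type RankTwoer interface {
        mat64.Symmetric
        // RankTwo sets the receiver to a + alpha * (x*y' + y*x').
        RankTwo(a mat64.Symmetric, alpha float64, x, y *mat64.Vector)
    }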
On Aug 20, 2015, at 11:26 PM, Vladimír Chalupecký <vladimir....@gmail.com> wrote:

> > * I know that I proposed the Needs() method, but I am not very happy with it and I think/hope that we could get rid of it. The optimizers already announce what they need through EvaluationType, so we can allocate upon request in evaluate().
> The thing about this is that it could possibly fail after several function evaluations, but the check should be as soon as possible. A method may want to do several function evaluations before doing a gradient evaluation.

And when it requests a gradient evaluation and Location.Gradient is nil, then it would be allocated before doing the evaluation. But it was really just an idea for consideration; the case for it is not so strong. LinesearchMethod would have to be changed because it uses the non-nilness of Location's fields to decide the next evaluation type. And then there is the issue of the initial data.

> > For the initial data we could take them from Settings if provided by the client and InitialIteration has not yet been announced. Maybe there is some issue with this but I feel it might work.
> > * Location.Hessian should be mat64.Symmetric instead of *SymDense so that in the future the Hessian could be sparse. But then the question arises who allocates the concrete type. Methods in Init()? Some other way?
> It can't just be Symmetric. At least it has to be a SymRankOner and a RankTwoer. Given that, it's not clear to me that it should be generic. Will the matrix be kept sparse after many RankTwo updates? Even if it can, we already have the sparse BFGS version; that's what LBFGS is.

Location.Hessian is the Hessian of the objective function and we do not touch it, so it can be just Symmetric; there is no relation to BFGS. If it were sparse, the linear solve in Newton could make use of that fact, for example.