I'm translating a fairly straightforward optimisation code example from
Octave. (Attached; it does a quadratic regression, with a tweaked
regularisation function.)
Both fmin_cg and fmin_bfgs give me poor convergence and this warning:
"Desired error not necessarily achieveddue to precision loss"
This is with various regularisation strengths, with normalised data, and
with high-precision data (float128).
Is there something I can do to enable these to converge properly?
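For reference, the shape of what I'm calling is roughly this (a minimal
sketch with toy data; the real cost/gradient functions and the exact
regularisation term are in the attachment, so the names here are just
placeholders):

    import numpy as np
    from scipy.optimize import fmin_bfgs

    def cost(theta, X, y, lam):
        # squared-error cost plus a (placeholder) regularisation term
        resid = X.dot(theta) - y
        return 0.5 * np.sum(resid ** 2) + lam * np.sum(theta[1:] ** 2)

    def grad(theta, X, y, lam):
        # analytic gradient of the cost above
        resid = X.dot(theta) - y
        g = X.T.dot(resid)
        g[1:] += 2.0 * lam * theta[1:]
        return g

    # toy quadratic data; design matrix has columns [1, x, x**2]
    rng = np.random.RandomState(0)
    x = rng.uniform(-1.0, 1.0, 50)
    y = 1.0 + 2.0 * x + 3.0 * x ** 2 + 0.1 * rng.randn(50)
    X = np.column_stack([np.ones_like(x), x, x ** 2])
    theta0 = np.zeros(3)
    lam = 0.1

    theta_opt = fmin_bfgs(cost, theta0, fprime=grad, args=(X, y, lam))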
Thanks
Dan
(Using Ubuntu 11.04, Python 2.7.1, SciPy 0.8)
--
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/
Anyone got any suggestions about this "precision loss" issue, please?
I found this message from last year, suggesting that using dot instead
of sum might help (yuck):
http://comments.gmane.org/gmane.comp.python.numeric.general/41268
- but it made no difference here; I still get the optimisation stopping
after three iterations with the same complaint.
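For what it's worth, the change that message suggests amounts to something
like this (a sketch; resid is just a placeholder for the residual vector in
my cost function), and swapping one form for the other didn't change the
warning:

    import numpy as np

    resid = np.array([0.1, -0.2, 0.05])   # placeholder residual vector
    sse_sum = np.sum(resid * resid)       # elementwise multiply, then sum
    sse_dot = np.dot(resid, resid)        # same quantity as a single dot product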
Any tips welcome
Thanks
Dan
Something is wrong with the gradient calculation.
If I drop fprime in the call to fmin_bfgs, then it converges after 11
to 14 iterations (600 in the last case).
fmin also doesn't have any problems with convergence.
(I'm using just float64.)
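You can also check the gradient directly with scipy.optimize.check_grad,
which compares the analytic gradient against a finite-difference estimate
(a sketch; cost, grad and theta0 stand in for your actual functions and
starting point):

    from scipy.optimize import check_grad

    # returns the norm of the difference between grad() and a
    # finite-difference approximation; it should be close to zero
    err = check_grad(cost, grad, theta0, X, y, lam)
    print(err)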
Josef
Thanks, you're absolutely right. (Also, plain 'fmin' converges easily.)
I've found the problem now. I was assuming that the optimiser would
preserve the shape of my parameter vector (a column vector), whereas it
was actually feeding a flattened row vector into my functions, causing
wrong behaviour. A bit of reshape fixed it.
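In case it helps anyone searching the archives later, the fix amounts to
restoring the column shape at the top of each function, since the optimiser
hands the parameters over as a flat 1-D array (a sketch; the names are
placeholders for my actual functions):

    import numpy as np

    def cost(theta, X, y, lam):
        theta = theta.reshape(-1, 1)   # optimiser passes a flat array; restore column shape
        resid = X.dot(theta) - y       # y is a column vector here
        return float(0.5 * np.sum(resid ** 2) + lam * np.sum(theta[1:] ** 2))

    def grad(theta, X, y, lam):
        theta = theta.reshape(-1, 1)
        resid = X.dot(theta) - y
        g = X.T.dot(resid)
        g[1:] += 2.0 * lam * theta[1:]
        return g.ravel()               # hand back a flat array, as the optimiser expects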
Thanks
Dan