[SciPy-User] fmin_cg fmin_bfgs "Desired error not necessarily achieved due to precision loss"


Dan Stowell

Nov 19, 2011, 2:19:47 PM
to scipy...@scipy.org
Hi,

I'm translating a fairly straightforward optimisation code example from
Octave. (Attached - it does a quadratic regression, with a tweaked
regularisation function.)

Both fmin_cg and fmin_bfgs give me poor convergence and this warning:

"Desired error not necessarily achieved due to precision loss"

This is with various regularisation strengths, with normalised data, and
with high-precision data (float128).

Is there something I can do to enable these to converge properly?

Thanks
Dan

(Using ubuntu 11.04, python 2.7.1, scipy 0.8)

--
Dan Stowell
Postdoctoral Research Assistant
Centre for Digital Music
Queen Mary, University of London
Mile End Road, London E1 4NS
http://www.elec.qmul.ac.uk/digitalmusic/people/dans.htm
http://www.mcld.co.uk/

arcsml.py
data.csv

Dan Stowell

Nov 23, 2011, 4:41:57 AM
to scipy...@scipy.org
(Bump)

Anyone got any suggestions about this "precision loss" issue, please?

I found this message from last year, suggesting that using dot instead
of sum might help (yuck):
http://comments.gmane.org/gmane.comp.python.numeric.general/41268

- but no difference here, I still get the optimisation stopping after
three iterations with that complaint.
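[For readers of the archive: the dot-instead-of-sum trick from the linked thread can be sketched as below. The two expressions are mathematically identical, but np.dot goes through a single fused (BLAS) accumulation, which for large arrays can be slightly more accurate than squaring elementwise and summing. Toy numbers, not from arcsml.py:]

```python
import numpy as np

# Illustrative residual vector (hypothetical data)
r = np.array([1e-8, 2e-8, 3e-8])

# Two mathematically equivalent ways to compute a sum of squares:
cost_sum = np.sum(r ** 2)  # elementwise square, then sum
cost_dot = np.dot(r, r)    # single dot product

print(cost_sum, cost_dot)
```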

Any tips welcome

Thanks
Dan

_______________________________________________
SciPy-User mailing list
SciPy...@scipy.org
http://mail.scipy.org/mailman/listinfo/scipy-user

josef...@gmail.com

Nov 23, 2011, 9:48:50 AM
to SciPy Users List
On Wed, Nov 23, 2011 at 4:41 AM, Dan Stowell
<dan.s...@eecs.qmul.ac.uk> wrote:
> (Bump)
>
> Anyone got any suggestions about this "precision loss" issue, please?
>
> I found this message from last year, suggesting that using dot instead
> of sum might help (yuck):
> http://comments.gmane.org/gmane.comp.python.numeric.general/41268
>
> - but no difference here, I still get the optimisation stopping after
> three iterations with that complaint.

something is wrong with the gradient calculation

If I drop fprime in the call to fmin_bfgs, then it converges after 11
to 14 iterations (600 in the last case)

fmin also doesn't have any problems with convergence

(I'm using just float64)

Josef

Dan Stowell

Nov 23, 2011, 10:02:09 AM
to SciPy Users List
On 23/11/2011 14:48, josef...@gmail.com wrote:
> On Wed, Nov 23, 2011 at 4:41 AM, Dan Stowell
> <dan.s...@eecs.qmul.ac.uk> wrote:
>> (Bump)
>>
>> Anyone got any suggestions about this "precision loss" issue, please?
>>
>> I found this message from last year, suggesting that using dot instead
>> of sum might help (yuck):
>> http://comments.gmane.org/gmane.comp.python.numeric.general/41268
>>
>> - but no difference here, I still get the optimisation stopping after
>> three iterations with that complaint.
>
> something is wrong with the gradient calculation
>
> If I drop fprime in the call to fmin_bfgs, then it converges after 11
> to 14 iterations (600 in the last case)
>
> fmin also doesn't have any problems with convergence
>
> (I'm using just float64)
>
> Josef

Thanks, you're absolutely right. (Also, plain 'fmin' converges easily.)

I've found the problem now. I was assuming that the optimiser would
preserve the shape of my parameter vector (a column vector), whereas it
was feeding a flattened row vector into my functions, causing wrong
behaviour. A bit of reshape fixed it.

Thanks
Dan
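[Archive note: the shape pitfall Dan describes is that fmin_bfgs works internally with a 1-D parameter array, so a cost function written for an (n, 1) column vector receives an (n,) array instead. A minimal sketch of the reshape fix, with hypothetical data standing in for data.csv:]

```python
import numpy as np
from scipy.optimize import fmin_bfgs

# fmin_bfgs passes the parameters as a 1-D array, so a cost function
# expecting a column vector must restore the shape itself.
def cost(theta, X, y):
    theta = theta.reshape(-1, 1)   # restore (n, 1) column-vector shape
    resid = X.dot(theta) - y
    return (resid.T.dot(resid)).item()  # scalar sum of squared residuals

# Hypothetical design matrix and targets (exact solution is [1, 2])
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([[1.0], [2.0], [3.0]])

theta0 = np.zeros(2)  # 1-D start point, matching what the optimiser
                      # will pass back into cost()
theta_hat = fmin_bfgs(cost, theta0, args=(X, y), disp=False)
print(theta_hat)
```

Reshaping at the top of the cost (and gradient) function, rather than changing the rest of the linear algebra, is usually the smallest fix.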
