
nlparci returns NaN


Klop Oray

Sep 5, 2011, 11:21:10 AM

Hello,

I'm trying to fit some data with lsqnonlin. I have 57 arguments in the objective function and 1708 data entries. I've run lsqnonlin and got these results:
[x,resnorm,residual,exitflag,output,lambda,jacobian] = lsqnonlin(@model.objective_hdl,x0,lb,ub,options,model);

Which I then fed to nlparci:
ci = nlparci(x,residual,'jacobian',jacobian);

The residual array is 1708x1 and the jacobian is a 1708x57 sparse matrix. Both look legitimate and contain no Infs or NaNs. I'm also plotting x, so I know it's fine too.

The ci, however, is a 57x2 matrix whose first 29 rows are filled with NaNs.

Thanks in advance to anyone who can help,
Best regards,
Slava

Klop Oray

Sep 10, 2011, 12:14:28 PM

Update:

After further investigation I found that the problem arises in nlparci during matrix inversion:

% Calculate covariance matrix
[~,R] = qr(J,0);
Rinv = R\eye(size(R)); %<===== Problem is here, R is singular to working precision - inversion produces bad results
diag_info = sum(Rinv.*Rinv,2);

I noticed that many of the Jacobian (J) entries are zero. Does this mean that those arguments are not "significant" for the fit - i.e. that changing them has no effect on the quality of the fit - and thus that the model I'm using is perhaps not the best one for the task and could be significantly simplified? Is that the only way out, or are there some guidelines for "normalizing" the Jacobian?
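
One way to see which parameters are behind the singular R (a rough sketch; "jacobian" here is the sparse Jacobian returned by lsqnonlin in my first post, and the 1e-10 threshold is arbitrary):

% Which columns of the Jacobian carry essentially no information?
Jf = full(jacobian);                            % sparse Jacobian from lsqnonlin, made full
colNorms = sqrt(sum(Jf.^2, 1));                 % sensitivity of the residual to each parameter
dead = find(colNorms < 1e-10*max(colNorms))     % parameters with (near-)zero effect on the fit
fprintf('numerical rank of J: %d of %d columns\n', rank(Jf), size(Jf,2));

A zero column means the fit is completely insensitive to that parameter, so its confidence interval is undefined; a rank below 57 with no zero columns points to parameters that only enter the model in combination.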

Thanks.

Bruno Luong

Sep 10, 2011, 12:49:28 PM

"Klop Oray" <klp....@gmail.com> wrote in message <j4g2d4$i0n$1...@newscl01ah.mathworks.com>...
Typically it can be this or another reason, such as:

1) An underdetermined system - for example, your measurement depends only on the difference of two quantities, but you want to know both quantities.

2) The parameters are badly scaled: you combine very sensitive parameters with very insensitive ones.

3) any other foolish thing you could do with the model...

There is no miracle recipe or generic guideline that can be applied in all cases. You just need to know what you are doing.

Bruno

Klop Oray

Sep 10, 2011, 1:34:10 PM

> Typically it can be this or another reason, such as:
>
> 1) An underdetermined system - for example, your measurement depends only on the difference of two quantities, but you want to know both quantities.

I don't think this applies in my case - I use a sum of two exponentials, one slow and stretched and the other fast, and I ensure those qualities by imposing very different bounds.

>
> 2) The parameters are badly scaled: you combine very sensitive parameters with very insensitive ones.
>

I thought so at first too since, as I said, I have a slow and a fast exponential. However, scaling didn't solve the problem (all it did was reduce the accuracy of the Jacobian computation - the more I scaled, the less my analytical Jacobian agreed with the finite-difference Jacobian).
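
For reference, the kind of comparison I mean (a minimal sketch; objfun and x stand in for my objective handle and the fitted parameters - lsqnonlin's DerivativeCheck option does much the same automatically):

% Forward-difference check of an analytical Jacobian (illustrative only).
% objfun is assumed to return [residual, Jacobian] for a parameter vector x.
[r0, Janalytic] = objfun(x);
Jfd = zeros(numel(r0), numel(x));
for k = 1:numel(x)
    h  = 1e-6 * max(1, abs(x(k)));       % step scaled to the parameter magnitude
    xk = x;  xk(k) = xk(k) + h;
    Jfd(:,k) = (objfun(xk) - r0) / h;
end
relErr = norm(Jfd - full(Janalytic), 'fro') / max(norm(full(Janalytic), 'fro'), eps)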

> 3) any other foolish thing you could do with the model...
>

That is what I'm looking for - more foolish things I didn't consider. (Scaling and centering are mentioned in the MATLAB documentation; that is why I was asking about some guidelines or a paper listing things to look out for.)

Bruno Luong

Sep 10, 2011, 2:03:13 PM

"Klop Oray" <klp....@gmail.com> wrote in message <j4g72i$3n2$1...@newscl01ah.mathworks.com>...

>
> I thought so at first too since, as I said, I have a slow and a fast exponential. However, scaling didn't solve the problem (all it did was reduce the accuracy of the Jacobian computation - the more I scaled, the less my analytical Jacobian agreed with the finite-difference Jacobian).

Good scaling must be relative to the change of the output through the model; in other words, you must scale so that the output changes by more or less the same amount for each parameter.

Mathematicians often call this "right-preconditioning". It can be an art in itself - easier to say than to do.
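
As a rough illustration (a sketch only; J stands for the Jacobian at the solution), right-preconditioning amounts to rescaling the columns of J so they all have comparable norm:

% Column scaling ("right-preconditioning") of the Jacobian - sketch only.
s = full(sqrt(sum(J.^2, 1)));                    % per-parameter sensitivity of the output
s(s == 0) = 1;                                   % leave completely insensitive parameters untouched
D  = spdiags(1./s(:), 0, numel(s), numel(s));    % right preconditioner: work with y = s.*x
Js = J * D;                                      % every column of Js now has norm 1 (or 0)
[~, R0] = qr(J,  0);
[~, Rs] = qr(Js, 0);
fprintf('condest(R) before: %g   after: %g\n', condest(R0), condest(Rs));

If R stays singular even after this, the problem is not scaling but a genuinely non-identifiable combination of parameters.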

Bruno

Klop Oray

Sep 29, 2011, 10:15:31 AM

> Mathematicians often call this "right-preconditioning". It can be an art in itself - easier to say than to do.

Can you recommend a text dealing with right-preconditioning?

Nathan Orloff

Aug 7, 2013, 11:11:11 AM

I read this post and ended up not finding the suggestions helpful.

I did figure out the issue. Here is a step-by-step suggestion (a rough sketch of these steps follows the list):

1) Use the Symbolic Math Toolbox to write up your model.
2) Compute the analytical expressions for the partial derivatives with the jacobian function, called like jfunc = jacobian(func, <your fit params here>);
3) Evaluate the Jacobian at your fit parameters.
4) Plot the Jacobian.
5) Replace NaNs with eps: J(isnan(J)) = eps.
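
A minimal sketch of these steps, using an illustrative two-exponential model and made-up parameter values (requires the Symbolic Math Toolbox; substitute your own model and fitted values):

% Steps 1-2: symbolic model and analytical Jacobian.
syms t a1 k1 a2 k2
f    = a1*exp(-k1*t) + a2*exp(-k2*t);
jsym = jacobian(f, [a1 k1 a2 k2]);

% Step 3: evaluate the Jacobian on the data grid at the fitted parameters.
tdata = linspace(0, 10, 200).';
p     = [1.0, 0.5, 0.2, 5.0];                    % made-up fitted values
J = zeros(numel(tdata), numel(p));
for k = 1:numel(p)
    dk = matlabFunction(jsym(k), 'Vars', {t, a1, k1, a2, k2});
    J(:,k) = dk(tdata, p(1), p(2), p(3), p(4));
end

% Steps 4-5: plot the columns and replace any NaNs with eps.
plot(tdata, J)
J(isnan(J)) = eps;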

In my case, my model was a complex function that I had split into real and imaginary parts, so I plotted the magnitude and phase of the Jacobian. It was clear that the phase was running through a branch cut, so I had to fix that with unwrap. Fixing the phase made the problem go away.

So, in summary: look very carefully at the Jacobian. If there are NaNs or numbers near zero, you probably have an issue with your model in those regions of your fit. I hope this helps.

Nate

Eric Copenhaver

Jan 17, 2017, 10:02:06 PM

Hi all! I've run into this issue a few times recently, and each time it was due to a silly mistake on my part: across iterations I had changed the length of the parameter array but not cleared the variable. In particular, I had reduced the number of parameters in the array I passed into nlparci, so nlparci tried to vary a parameter that doesn't even appear in the fit function, resulting in immeasurable changes in the fit function when that parameter was varied. Simply clearing the parameters variable fixed the problem for me. I hope it's as easy for the rest of you!

"Klop Oray" wrote in message <j42pd6$bjl$1...@newscl01ah.mathworks.com>...