Covariance matrix has negative elements on the diagonal


Mostafa

Jul 26, 2019, 4:42:52 PM
to lmfit-py
Hi,

Optimization problem:
The model is a superconducting flux qubit, which essentially is a Hamiltonian (60x60 Hermitian matrix) that is constructed and diagonalized to get first two eigenenergies. Then these two eigenenergies are asked to be fit to some experimental data to find the qubit parameters (that are used to construct the Hamiltonian).
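To make the setup concrete, here is a minimal sketch of the kind of objective function being described. This is not the actual code: build_hamiltonian(), residual(), and the parameter names Ej, Ec, and alpha are hypothetical placeholders that just have roughly the right shape (a 60x60 Hermitian matrix, diagonalized at each bias point, with the lowest eigenenergies compared to data).

import numpy as np


def build_hamiltonian(flux, Ej, Ec, alpha):
    # Hypothetical 60x60 Hermitian matrix; the real flux-qubit Hamiltonian is
    # much more involved, this one just has the right size and symmetry.
    n = np.arange(-30, 30)                       # 60 basis states
    H = np.diag(4.0 * Ec * (n - alpha * flux) ** 2).astype(complex)
    off = np.full(59, -Ej / 2.0)
    H += np.diag(off, k=1) + np.diag(off, k=-1)  # nearest-neighbour coupling
    return H


def residual(params, flux_values, data):
    # One plausible choice: the gap between the two lowest eigenenergies at
    # each flux point, minus the measured data.
    Ej = params['Ej'].value
    Ec = params['Ec'].value
    alpha = params['alpha'].value
    model = []
    for flux in flux_values:
        evals = np.linalg.eigvalsh(build_hamiltonian(flux, Ej, Ec, alpha))
        model.append(evals[1] - evals[0])
    return np.asarray(model) - data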

Solving the optimization problem:
I use the 'nelder' method and have numdifftools installed to calculate the covariance matrix of the fit result. I can find my system parameters for a variety of input data: the fit converges and yields reasonable parameter errors.
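For reference, a sketch of that fit call with lmfit, continuing the hypothetical names from the sketch above (flux_values and data stand in for the experimental arrays):

from lmfit import Parameters, minimize, fit_report

params = Parameters()
params.add('Ej', value=50.0, min=0)
params.add('Ec', value=1.0, min=0)
params.add('alpha', value=0.7, min=0, max=1)

# With numdifftools installed, lmfit can estimate a covariance matrix even for
# solvers such as Nelder-Mead that do not provide one themselves.
result = minimize(residual, params, args=(flux_values, data),
                  method='nelder', calc_covar=True)
print(fit_report(result))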

Error encountered:
For some sets of data (and I'm not sure what is unique about them) I can find the optimal fitting parameters when calc_covar=False, but when calc_covar=True I get the following errors at the end:
invalid value encountered in sqrt par.stderr = sqrt(self.result.covar[ivar, ivar])
invalid value encountered in sqrt (par.stderr * sqrt(self.result.covar[jvar, jvar])))

The minimizer gives the same optimal parameters as in the calc_covar=False case, but when it tries to calculate the covariance it gives this error. Again, this does not happen for all of my data, only for some sets.

Main problem:
When looking at MinimizerResult.covar, I see that the covariance matrix has negative elements on the diagonal, which is the culprit. I'm not sure how the covariance matrix is calculated in your code, but it should always have positive diagonal elements, as they are sigma**2 and cannot be negative. This is the main problem that I wanted to discuss, and I would appreciate it if you could elaborate.
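For anyone wanting to see this directly, the quickest check is to look at the diagonal of the returned matrix (result being the MinimizerResult, as in the sketch above):

import numpy as np

if result.covar is not None:
    diag = np.diag(result.covar)
    print("diagonal of covar:", diag)
    print("indices of negative entries:", np.where(diag < 0)[0])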

Unfortunately, the code that produces this error is extremely lengthy, and when I tried to reproduce the problem with some simpler models and data to post here, I failed. However, I guess you just need to check that the diagonal elements of the covariance matrix are positive, and hopefully nothing fancier needs to be done.

Thanks for your amazing and user-friendly package.

Cheers,
Mostafa

Matt Newville

Jul 27, 2019, 12:48:26 PM
to lmfit-py
Hi Mostafa, 


On Fri, Jul 26, 2019 at 3:42 PM Mostafa <mostaf...@gmail.com> wrote:
Main problem:
When looking at MinimizerResult.covar, I see that the covariance matrix has negative elements on the diagonal, which is the culprit. I'm not sure how the covariance matrix is calculated in your code, but it should always have positive diagonal elements, as they are sigma**2 and cannot be negative. This is the main problem that I wanted to discuss, and I would appreciate it if you could elaborate.


Hm, that seems odd.   If I were trying to solve this, I would ask a few questions:

   1.  in the cases that fail with `method='nelder'`, does `method='leastsq'` work?
   2.  are the fit results sensible?
   3.  is the value for the model finite, real, etc?  (see the quick check sketched below)
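For question 3, a quick check along these lines (using the hypothetical residual(), result, flux_values, and data from the sketches above) would do:

import numpy as np

out = np.asarray(residual(result.params, flux_values, data))
print("all finite:", bool(np.all(np.isfinite(out))))
print("all real:  ", bool(np.all(np.isreal(out))))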

Unfortunately, the code that produces this error is extremely lengthy, and when I tried to reproduce the problem with some simpler models and data to post here, I failed. However, I guess you just need to check that the diagonal elements of the covariance matrix are positive, and hopefully nothing fancier needs to be done.



If you are unable to post an example that reproduces the problem, we really cannot investigate the problem in any detail and can only go on what you tell us. You have not shown a single line of code or output.  The error messages you gave were clipped and not the full messages.  You must have read the instructions for how to ask questions, and you chose to ignore all advice and requests.  

Normally we would be willing to try to help, but if you cannot show any of the information that we asked for, you will have to look into the code and figure it out for yourself. We would be grateful if you got back to us on what you find. For reference, the covariance matrix is calculated in Minimizer._calculate_covariance_matrix(), which uses numdifftools to calculate the second derivatives (the Hessian) of the objective and inverts that to construct the covariance matrix. It's possible that for some conditions the Hessian matrix is unstable and cannot be inverted to a positive definite matrix. I'm not sure why that would be the case.
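For what it's worth, here is a rough sketch of that recipe (not lmfit's actual implementation, just the general idea, reusing the hypothetical build_hamiltonian() and result from the earlier sketches): estimate the Hessian of a scalar sum-of-squares objective at the best-fit values with numdifftools and invert it. If the estimated Hessian is not positive definite (for example because of a flat or noisy direction at a Nelder-Mead "minimum"), the inverse can end up with negative diagonal entries.

import numpy as np
import numdifftools as nd


def cost(p, flux_values, data):
    # Scalar sum-of-squares objective over plain parameter values [Ej, Ec, alpha].
    Ej, Ec, alpha = p
    model = []
    for f in flux_values:
        evals = np.linalg.eigvalsh(build_hamiltonian(f, Ej, Ec, alpha))
        model.append(evals[1] - evals[0])
    return np.sum((np.asarray(model) - data) ** 2)


best = np.array([result.params[name].value for name in ('Ej', 'Ec', 'alpha')])
hess = nd.Hessian(lambda p: cost(p, flux_values, data))(best)

print("Hessian eigenvalues:", np.linalg.eigvalsh(hess))  # any <= 0: not positive definite
covar = np.linalg.inv(hess)                              # up to an overall scale factor
print("diagonal of covar:  ", np.diag(covar))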

--Matt

Mostafa

Jul 31, 2019, 1:45:22 PM
to lmfit-py
Hi Matt,

Thank you very much for your quick and thorough response.

 1.  in the cases that fail with `method='nelder'`, does `method='leastsq'` work?

Yes, it works with the leastsq method, although the fitted values are different from those obtained with nelder.

2.  are the fit results sensible?  

They are, in the sense that when you plot them on top of the data, they look alright by eye.

 3.  is the value for the model finite, real, etc?

Values for the model are finite and real.

If you are unable to post an example that reproduces the problem, we really cannot investigate the problem in any detail...

I absolutely understand, and I spent these days trying to find a simpler example that reproduces this, without having to paste thousands of lines of proprietary (ugh :/) code here, but unfortunately I was not successful. I did read all of your instructions, and I didn't ignore the parts that I could follow. The instructions said that if you're not sure, you should send a message, so I did that instead of filing an issue or doing something else.

Normally we would be willing to try to help, but if you cannot show any of the information that we asked for, you will have to look into the code and figure it out for yourself. 

I understand and appreciate your valuable time. I believe at this stage the minimum that can be done is for the minimizer to raise a warning to the user when sigma**2 (the diagonal elements of the covariance matrix) becomes negative, so that they know the reported errors should be taken with a grain of salt.
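For illustration, the check I have in mind would look something like this (here applied from user code to a finished MinimizerResult called result, though the natural place for it would be inside the minimizer):

import warnings
import numpy as np

if result.covar is not None and np.any(np.diag(result.covar) < 0):
    warnings.warn("Covariance matrix has negative diagonal elements; "
                  "the reported parameter uncertainties are not reliable.")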

I will mark this as complete, and appreciate your time and help.

Cheers,
Mostafa

Matt Newville

Jul 31, 2019, 9:01:27 PM
to lmfit-py
On Wed, Jul 31, 2019 at 12:45 PM Mostafa <mostaf...@gmail.com> wrote:

 1.  in the cases that fail with `method='nelder'`, does `method='leastsq'` work?

Yes, it works with the leastsq method, although the fitted values are different from those obtained with nelder.


Well, if leastsq works, I would go with that. You might look into how different the results from the two methods are, compared to the estimated uncertainties.
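A sketch of that comparison, continuing the hypothetical setup from the earlier sketches (fit the same data with both solvers, then measure the parameter differences against the leastsq stderr estimates):

from lmfit import minimize

result_nm = minimize(residual, params, args=(flux_values, data), method='nelder')
result_ls = minimize(residual, params, args=(flux_values, data), method='leastsq')

for name in result_ls.params:
    v_nm = result_nm.params[name].value
    v_ls = result_ls.params[name].value
    err = result_ls.params[name].stderr
    if err:  # stderr can be None if the covariance could not be estimated
        print(f"{name}: |nelder - leastsq| = {abs(v_nm - v_ls):.3g} "
              f"({abs(v_nm - v_ls) / err:.2f} sigma)")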





--Matt