
Singular matrix in Invert_RtR


Dan Hatton
Aug 7, 2007, 10:28:13 AM

Dear All,

With a particularly tricky fitting problem, I keep getting the error
message

Singular matrix in Invert_RtR

just after the fit converges.

I have a vague feeling that this might be related to the number of
iterations that it took for the fit to converge being smaller than the
number of independent components of the "correlation matrix" (in fact,
it's even smaller than the number of parameters.) Two questions:

Am I anywhere near correct in this?

Is there something like a FIT_MINITER setting that might stop it
happening?

Thank you very much.

--

Dan Hatton

<http://www.bib.hatton.btinternet.co.uk/dan/>

Hans-Bernhard Bröker
Aug 8, 2007, 7:05:36 AM
Dan Hatton wrote:

> I have a vague feeling that this might be related to the number of
> iterations that it took for the fit to converge being smaller than the
> number of independent components of the "correlation matrix" (in fact,
> it's even smaller than the number of parameters.)

That's quite certainly not the case. Each parameter is varied in every
iteration.

"Singular Matrix" often means that you have specified parameters that
the fit doesn't depend on at all. Other possible causes include
parameters that start out exactly zero.
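
As a made-up illustration (the straight-line model and the file name
"data.dat" are just placeholders, not anything from this thread): a
parameter that is listed after 'via' but never appears in the model
produces exactly this error.

  f(x) = a*x + b
  a = 1.0; b = 1.0; c = 1.0
  # c does not occur in f(x), so no residual depends on it and the
  # normal-equation matrix is singular:
  fit f(x) "data.dat" using 1:2 via a,b,c    # -> "Singular matrix in Invert_RtR"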

Dan Hatton
Aug 9, 2007, 10:58:47 AM
On Wed, 8 Aug 2007, Hans-Bernhard Bröker wrote:

> That's quite certainly not the case. Each parameter is varied in
> every iteration.

It seems not. One of my eight parameters hasn't changed at all in the
four iterations it took for the fit to converge. And between the last
two iterations, only one parameter changed.

> "Singular Matrix" often means that you have specified parameters
> that the fit doesn't depend on at all. Other possible causes
> include parameters that start out exactly zero.

I don't have a parameter that's exactly zero; I've forced them all to
be within a couple of orders of magnitude of the same size. But I do
have something that's exactly zero:

final sum of squares of residuals : 8.64354e+12
rel. change during last iteration : 0

Does that ring any bells?

--

Thanks,

Dan

Hans-Bernhard Bröker
Aug 9, 2007, 11:54:44 AM
Dan Hatton wrote:
> On Wed, 8 Aug 2007, Hans-Bernhard Bröker wrote:
>
>> That's quite certainly not the case. Each parameter is varied in
>> every iteration.

> It seems not. One of my eight parameters hasn't changed at all in the
> four iterations it took for the fit to converge.

That's a sign of trouble. It means that the first of my suggestions
about the "singular matrix" error:

>> "Singular Matrix" often means that you have specified parameters
>> that the fit doesn't depend on at all.

applies to your fit. I.e. if the parameter doesn't affect the quality
of the fit within the search area, then it can't be changed in any way.
As a consequence, the error for that parameter is infinite. It's this
infinity that blows up the matrix inversion.

> final sum of squares of residuals : 8.64354e+12
> rel. change during last iteration : 0

> Does that ring any bells?

No. The delta-chisquare isn't used during the Invert_RtR step that's
bombing out.

Dr Engelbert Buxbaum
Aug 9, 2007, 2:37:37 PM
On 07.08.2007 at 10:28, Dan Hatton <vi5u0-...@yahoo.co.uk> wrote:

>
> Dear All,
>
> With a particulalrly tricky fitting problem, I keep getting the error
> message
>
> Singular matrix in Invert_RtR
>
> just after the fit converges.

The problem is that Gnuplot's fit uses a procedure known as the
Levenberg-Marquardt algorithm, which is sometimes very ill-behaved. There
is an alternative fitting algorithm, the Simplex method of Nelder & Mead,
which is much more stable and can also minimise Chi^2 or the median of
residuals in those cases where minimising the sum of squares is
statistically inappropriate. Unfortunately, Simplex cannot directly
calculate the standard deviations of the parameters, which is probably why
many scientific fitting programs (like Gnuplot) do not use it.

Byte Magazine in 1985 had a nice paper on Simplex by Caceci & Cacheris
together with an implementation of that algorithm.

Sometimes it is possible to use Marquardt with the parameters estimated by
Simplex in order to get the standard deviations of the parameters, but
frequently one runs into one of two problems:
- the fit aborts with a singular matrix
- the sum of squares for the parameters calculated by Marquardt is several
times larger than that calculated by Simplex

The alternative is to bite the bullet and calculate standard deviations by
bootstrapping; that is computationally expensive, but after all we no
longer use 8088 processors.
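
A rough sketch of what that bootstrap could look like driven from gnuplot
itself; this is untested and everything in it is an assumption (gnuplot 5
or later for the do-for loop, 'set fit quiet' and 'stats', a straight-line
placeholder model, data in columns 1 and 2, and resampled-with-replacement
copies of the data prepared beforehand as boot_001.dat, boot_002.dat, ...):

  f(x) = a*x + b
  set fit quiet
  set print "bootparams.dat"
  do for [i=1:200] {
      a = 1.0; b = 1.0      # restart each replicate from the same initial guess
      # refit one resampled copy of the data; note that a replicate whose fit
      # fails (e.g. with a singular matrix) will abort the loop
      fit f(x) sprintf("boot_%03d.dat", i) using 1:2 via a,b
      print sprintf("%g %g", a, b)
  }
  unset print
  # the spread of the refitted parameters estimates their standard errors
  stats "bootparams.dat" using 1 name "A" nooutput
  stats "bootparams.dat" using 2 name "B" nooutput
  print "sigma_a ~ ", A_stddev, "   sigma_b ~ ", B_stddev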

Dan Hatton
Aug 10, 2007, 7:57:42 AM
On Thu, 9 Aug 2007, Hans-Bernhard Bröker wrote:

> That's a sign of trouble. It means that the first of my suggestions
> about "singular matrix" error:

>>> "Singular Matrix" often means that you have specified parameters
>>> that the fit doesn't depend on at all.

> applies to your fit. I.e. if the parameter doesn't affect the
> quality of the fit within the search area, then it can't be changed
> in any way. As a consequence, the error for that parameter is
> infinite. It's this infinity that blows up the matrix inversion.

Thank you very much. So would I be right to describe my problem as
follows? The Levenberg-Marquardt algorithm needs something else to
tell it the first derivatives of a residual with respect to the
parameters. Gnuplot is providing that "something else," by imposing a
finite change (`search area') Deltap in the parameter, calculating the
resulting change Deltar in the residual, and estimating the derivative
by the ratio Deltar/Deltap. I'm running into trouble because this
estimate of the derivative is zero.

However, I'm _very_ sure the derivative isn't really zero.

A new question occurs to me: is Deltar really calculated as the change
in the residual, or just as the change in the fitting function? This
is important, because the relevant parameter occurs in the "using"
specifier, not in the fitting function.

--

Thanks

Dan

Hans-Bernhard Bröker
Aug 10, 2007, 12:11:12 PM
Dan Hatton wrote:

> Thank you very much. So would I be right to describe my problem as
> follows? The Levenberg-Marquardt algorithm needs something else to
> tell it the first derivatives of a residual with respect to the
> parameters. Gnuplot is providing that "something else," by imposing a
> finite change (`search area') Deltap in the parameter, calculating the
> resulting change Deltar in the residual, and estimating the derivative
> by the ratio Deltar/Deltap. I'm running into trouble because this
> estimate of the derivative is zero.

..., if summed over all data points in the input.

The matrix being inverted here is the product of the matrix of
derivatives, \partial r/\partial p, with its own transpose. I.e. that
derivative matrix has to have a rank smaller than the number of
parameters for this to fail.
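
Spelled out in that notation (just a sketch): writing
R_ij = \partial r_i/\partial p_j for residual i and parameter j,

  (R^T R)_jk = \sum_i (\partial r_i/\partial p_j)(\partial r_i/\partial p_k)

so if one parameter p_j changes no residual at all, i.e.
\partial r_i/\partial p_j = 0 for every data point i, the whole j-th row
and column of R^T R vanish, det(R^T R) = 0, and the inversion must fail.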

> However, I'm _very_ sure the derivative isn't really zero.

Well, since you didn't provide an actual example, we have no choice but
to trust your word on this.

> A new question occurs to me: is Deltar really calculated as the change
> in the residual, or just as the change in the fitting function? This
> is important, because the relevant parameter occurs in the "using"
> specifier, not in the fitting function.

Ahhh-ha! No, we don't re-read the data file (and thus don't re-apply
the 'using' transformation) for every single iteration. Doing that
would be rather wasteful. Which means that the derivative will indeed
be zero, and thus the fit has to fail. You should move the
transformations from the 'using' specification into the actual target
function.
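
As a made-up illustration of that last point (the exponential model, the
linear background and "data.dat" are placeholders, not the actual case
discussed here):

  # c only appears in the 'using' transformation; the data are read once, so
  # varying c changes nothing the fit sees and the matrix goes singular:
  f(x) = a*exp(-x/tau)
  a = 1.0; tau = 1.0; c = 0.5
  fit f(x) "data.dat" using 1:($2 - c*$1) via a,tau,c

  # the same transformation moved into the target function, where c becomes
  # an ordinary fit parameter:
  g(x) = a*exp(-x/tau) + c*x
  fit g(x) "data.dat" using 1:2 via a,tau,c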

Dan Hatton
Aug 15, 2007, 9:26:33 AM
On Fri, 10 Aug 2007, Hans-Bernhard Bröker wrote:

> Dan Hatton wrote:

>> The Levenberg-Marquardt algorithm needs something else to tell it
>> the first derivatives of a residual with respect to the parameters.

>> I'm running into trouble because this estimate of the derivative is
>> zero.

>> However, I'm _very_ sure the derivative isn't really zero.


>
> Well, since you didn't provide an actual example, we have no choice but to
> trust your word on this.

>> A new question occurs to me: is Deltar really calculated as the change
>> in the residual, or just as the change in the fitting function? This
>> is important, because the relevant parameter occurs in the "using"
>> specifier, not in the fitting function.

> No, we don't re-read the data file (and thus don't re-apply the
> 'using' transformation) for every single iteration.

That's my plan (somewhat) foiled, then. I was trying to do maximum
likelihood, not just least squares, by something like

fit anyoldfunction(a,b,x) "anyolddata.dat" \
    using ($1):(anyoldfunction(a,b,$1)+sqrt(log(2.0*pi)+2.0*lsigma+($2-anyoldfunction(a,b,$1))**2.0*exp(-2.0*lsigma))):(1.0) \
    via a,b,lsigma

where lsigma is the log of an initially unknown measurement error.
The WSSR is supposed to come out as -2 times the log likelihood.

I couldn't think how to incorporate the role of lsigma in the fitting
function, so I've found a solution outside of Gnuplot.
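
For the record, one way this might have stayed inside Gnuplot, following
the advice above, is to make the per-point "residual" itself the fitting
function of both x and y and fit it to zero. This is an untested sketch;
the model a*x + b is only a placeholder and the column layout is assumed.

  anyoldfunction(a,b,x) = a*x + b   # placeholder for the real model
  g(x,y) = sqrt(log(2.0*pi) + 2.0*lsigma + (y - anyoldfunction(a,b,x))**2.0*exp(-2.0*lsigma))
  a = 1.0; b = 1.0; lsigma = 0.0
  # four columns x:y:z:s with z = 0 and unit weights, so WSSR = -2 log likelihood
  # (gnuplot 4 style; a newer gnuplot may want an explicit error keyword here);
  # as with the expression above, the argument of sqrt must stay positive
  fit g(x,y) "anyolddata.dat" using 1:2:(0.0):(1.0) via a,b,lsigma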

Thanks for all the help.

--

Regards,

Dan
