
positive definite matrix?


Greg

Jan 4, 2011, 1:40:07 PM
Netizens:

I would like to show that if mu > 0 and N is a positive integer, and
the NxN matrix M is defined by M_ij = exp(-mu*|i-j|), then M is
positive definite. Does anyone know if this is true, or how to
prove it?
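For small cases this is easy to spot-check numerically; here is a sketch in numpy (the values of mu and N are arbitrary samples, not part of the question):

```python
import numpy as np

# Build M_ij = exp(-mu*|i-j|) for sample values of mu and N.
mu, N = 0.7, 8
i, j = np.indices((N, N))
M = np.exp(-mu * np.abs(i - j))

# A symmetric matrix is positive definite iff all its eigenvalues are > 0.
eigvals = np.linalg.eigvalsh(M)
print(eigvals.min() > 0)  # True
```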

Greg Spradlin
Embry-Riddle University

Chip Eastham

Jan 4, 2011, 4:19:13 PM


Certainly M is symmetric, since exp(A) is symmetric
whenever A is symmetric (consider the power series
expansion, which always converges).

We can omit the scalar -mu from consideration, as
any real scalar power of a sym. pos. def. matrix is
again sym. pos. def. (even negative powers, i.e. taking
inverses).

Now let A be s.t. A_ij = |i-j|. As already noted
this is a real symmetric matrix and has a complete
basis of eigenvectors. M = exp(A) will have the
same basis with eigenvalues exp(lambda) resp. if
A has eigenvalues lambda.

So we just observe exp(lambda) > 0.
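(A numpy sketch of this reading of the problem, computing the *matrix* exponential of the symmetric A through its spectral decomposition; N is a sample size. Note the matrix exponential is not the same thing as exponentiating each entry:)

```python
import numpy as np

# The matrix A_ij = |i-j| is real symmetric, so by the spectral theorem
# its matrix exponential is expm(A) = V diag(exp(lambda)) V^T.
N = 6
i, j = np.indices((N, N))
A = np.abs(i - j).astype(float)

lam, V = np.linalg.eigh(A)
expm_A = V @ np.diag(np.exp(lam)) @ V.T

# Its eigenvalues are exp(lambda) > 0, so expm(A) is positive definite --
# but it differs from the elementwise exponential np.exp(A).
print(np.linalg.eigvalsh(expm_A).min() > 0)  # True
print(np.allclose(expm_A, np.exp(A)))        # False
```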

regards, chip

Chip Eastham

Jan 4, 2011, 4:27:24 PM
On Jan 4, 1:40 pm, Greg <gssprad...@gmail.com> wrote:

Oops, I see now you mean an element by element
exponentiation...

It is still true that M is symmetric, and I
think the pos. definiteness follows from
looking at it as a scaled version of a
Vandermonde matrix:

[Vandermonde matrix -- Wikipedia]
http://en.wikipedia.org/wiki/Vandermonde_matrix

and checking that the leading principal
minors have positive determinants. But
let me polish this thought a bit before
confusing things further.

regards, chip

Ray Koopman

Jan 4, 2011, 5:45:42 PM

Yes, it's positive definite.
The inverse of its cholesky factor is bidiagonal. Let r = Exp[-mu].
Then the diagonal is {1, 1/Sqrt[1-r^2], ..., 1/Sqrt[1-r^2]},
and the subdiagonal is {-r/Sqrt[1-r^2], ..., -r/Sqrt[1-r^2]}.
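This bidiagonal form can be checked numerically (a numpy sketch; mu and N are arbitrary sample values):

```python
import numpy as np

# With r = exp(-mu) and s = sqrt(1-r^2), the claimed inverse L of the
# Cholesky factor K (M = K K^T, K lower triangular) has diagonal
# {1, 1/s, ..., 1/s} and subdiagonal {-r/s, ..., -r/s}.
mu, N = 0.5, 7
r = np.exp(-mu)
s = np.sqrt(1 - r**2)

i, j = np.indices((N, N))
M = r ** np.abs(i - j)

L = np.diag([1.0] + [1/s] * (N - 1)) + np.diag([-r/s] * (N - 1), k=-1)

K = np.linalg.cholesky(M)                      # lower triangular, M = K K^T
print(np.allclose(L @ K, np.eye(N)))           # True: L is K^(-1)
print(np.allclose(L.T @ L, np.linalg.inv(M)))  # True: hence M^(-1) = L^T L
```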

Greg

Jan 4, 2011, 6:30:22 PM

Thanks, Ray. If I understand correctly, if you let L be the
bidiagonal matrix with the given diagonal and subdiagonal, then LL^T =
M^(-1). Is there any way to verify this other than grinding it out?
How did you figure it out?

Greg

Ray Koopman

Jan 5, 2011, 12:33:53 AM

Sorry, I should have been more specific about what I meant by a
cholesky factor: If K is the cholesky factor of M then K is lower
triangular and M = K K^T, from which it follows that if L = K^(-1)
then M^(-1) = L^T L.

I ground it out for myself when I was a grad student, after
encountering M in a context that implied it was positive definite,
and then reading somewhere that M^(-1) was tridiagonal. Since
then I've learned that M is a special case of a "Toeplitz matrix".
Google for it. There's lots out there.

Ray Vickson

Jan 5, 2011, 6:24:46 PM

Why is it necessary to look at the Cholesky factors of the inverse?
The factors of the matrix itself are easy enough to get directly:
if r = exp(-mu) and s = sqrt(1 - r^2), then M = U^T U, where U is
the upper-triangular matrix

    [ 1   r    r^2   r^3  ...  r^(n-1)  ]
    [ 0   s    sr    sr^2 ...  sr^(n-2) ]
U = [ 0   0    s     sr   ...  sr^(n-3) ]
    [ :   :    :     :         :        ]
    [ 0   0    0     0    ...  s        ]

This comes almost immediately from application of the Cholesky
algorithm.
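This factorization is easy to spot-check numerically (a numpy sketch; mu and n are sample values):

```python
import numpy as np

# Build the upper-triangular U described above and check U^T U = M.
mu, n = 0.3, 6
r = np.exp(-mu)
s = np.sqrt(1 - r**2)

U = np.zeros((n, n))
U[0, :] = r ** np.arange(n)                  # first row: 1, r, r^2, ...
for k in range(1, n):
    U[k, k:] = s * r ** np.arange(n - k)     # row k: s, sr, sr^2, ... from col k

i, j = np.indices((n, n))
M = r ** np.abs(i - j)
print(np.allclose(U.T @ U, M))  # True
```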

R.G. Vickson

Ray Koopman

Jan 5, 2011, 6:55:49 PM

It isn't. I was just recounting how I did it 50 years ago,
which also happens to be how I remember the result :)

Ray Vickson

Jan 5, 2011, 10:26:13 PM

I suppose I should have stated my question as: how the heck do we
compute the Cholesky factors of the inverse? Any algorithms I have
seen have been for A itself, not for inv(A). Are there comparable (and
well-known) algorithms for the inverse?

R.G. Vickson

AP

Jan 6, 2011, 1:47:52 AM
On Tue, 4 Jan 2011 10:40:07 -0800 (PST), Greg <gsspr...@gmail.com>
wrote:

More generally, consider M_ij = x^|i-j| with x real;
your case is x = exp(-mu).

M_2

1 x
x 1

M_3

1 x x^2
x 1 x
x^2 x 1

M_4

1 x x^2 x^3
x 1 x x^2
x^2 x 1 x
x^3 x^2 x 1

It is easy to see that

det(M_N) = (1-x^2) det(M_(N-1)):

replace the first column by (first column) - x*(second column); the
new first column is (1-x^2, 0, ..., 0)^T, and expanding along it
leaves det(M_(N-1)). So

det(M_N) = (1-x^2)^(N-1),

and if |x| < 1 (which is true in your case because mu > 0) every
such determinant is positive, so M_N is positive definite
(by Sylvester's criterion: the leading principal minors of M_N
are exactly M_1, ..., M_N).
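The determinant formula can be spot-checked numerically (a numpy sketch; x is a sample value with |x| < 1):

```python
import numpy as np

# Check det(M_N) = (1 - x^2)^(N-1) for M_ij = x^|i-j|.
x = 0.4
for N in range(2, 8):
    i, j = np.indices((N, N))
    M = x ** np.abs(i - j)
    assert np.isclose(np.linalg.det(M), (1 - x**2) ** (N - 1))
print("ok")  # prints "ok"
```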


Ray Koopman

Jan 6, 2011, 2:34:39 AM

At this point I really have no idea how I solved it. Maybe I did
get the Cholesky factor of M and then invert it. All I know is
that the note I left for myself referred to the inverse. I think
the problem arose in a course in psychological scaling. M was a
correlation matrix, and M^(-1) was interesting because it could
potentially allow the analyst to decide whether a Thurstonian
common factor model or a Guttmanian simplex model was more
appropriate for the data.

David Bernier

Jan 6, 2011, 4:50:10 AM
Ray Koopman wrote:
> On Jan 5, 7:26 pm, Ray Vickson<RGVick...@shaw.ca> wrote:
>> On Jan 5, 3:55 pm, Ray Koopman<koop...@sfu.ca> wrote:
>>> On Jan 5, 3:24 pm, Ray Vickson<RGVick...@shaw.ca> wrote:
>>>> On Jan 4, 9:33 pm, Ray Koopman<koop...@sfu.ca> wrote:
[...]

I was intrigued and found some keywords for a Google search
that I thought I'd share:

Trench algorithm Toeplitz Cholesky statistics

"Trench" is for William Trench, who has written on Toeplitz matrices:
< http://ramanujan.math.trinity.edu/wtrench/research/index.shtml > .
