I would like to show that if mu > 0, N is a positive integer, and
the NxN matrix M is defined by
M_ij = exp(-mu |i-j|), then M is positive definite. Does anyone know
whether this is true, or how to prove it?
Greg Spradlin
Embry-Riddle University
Certainly M is symmetric, since exp(A) is symmetric
whenever A is symmetric (consider the power series
expansion, which always converges).
We can omit the scalar -mu from consideration, as
any real scalar power of a sym. pos. def. matrix is
again sym. pos. def. (even negative powers, i.e. taking
inverses).
Now let A be s.t. A_ij = |i-j|. As already noted
this is a real symmetric matrix and has a complete
basis of eigenvectors. M = exp(A) then has the
same eigenvectors, with eigenvalue exp(lambda) for
each eigenvalue lambda of A.
So we just observe that exp(lambda) > 0.
regards, chip
Oops, I see now you mean element-by-element
exponentiation...
It is still true that M is symmetric, and I
think the pos. definiteness follows from
looking at it as a scaled version of a
Vandermonde matrix:
[Vandermonde matrix -- Wikipedia]
http://en.wikipedia.org/wiki/Vandermonde_matrix
and checking that the leading principal
minors have positive determinants. But
let me polish this thought a bit before
confusing things further.
regards, chip
Yes, it's positive definite.
The inverse of its Cholesky factor is bidiagonal. Let r = Exp[-mu].
Then the diagonal is {1, 1/Sqrt[1-r^2], ..., 1/Sqrt[1-r^2]},
and the subdiagonal is {-r/Sqrt[1-r^2], ..., -r/Sqrt[1-r^2]}.
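For anyone who wants to check this numerically, here is a quick sketch
in Python/NumPy (the parameter values and variable names are mine, just
test choices, not anything from the thread):

    import numpy as np

    # Check the claim: for M_ij = exp(-mu*|i-j|), the inverse of the
    # lower-triangular Cholesky factor of M is bidiagonal with diagonal
    # {1, 1/sqrt(1-r^2), ...} and subdiagonal {-r/sqrt(1-r^2), ...}.
    mu, N = 0.7, 6                       # test values (my choice)
    r = np.exp(-mu)
    s = np.sqrt(1 - r**2)
    idx = np.arange(N)
    M = r ** np.abs(idx[:, None] - idx[None, :])

    K = np.linalg.cholesky(M)            # lower triangular, M = K K^T
    L = np.linalg.inv(K)

    expected = (np.diag(np.r_[1.0, np.full(N - 1, 1.0 / s)])
                + np.diag(np.full(N - 1, -r / s), k=-1))
    print(np.allclose(L, expected))      # True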
Thanks, Ray. If I understand correctly, if you let L be the
bidiagonal matrix with the given diagonal and subdiagonal, then LL^T =
M^(-1). Is there any way to verify this other than grinding it out?
How did you figure it out?
Greg
Sorry, I should have been more specific about what I meant by a
Cholesky factor: if K is the Cholesky factor of M then K is lower
triangular and M = K K^T, from which it follows that if L = K^(-1)
then M^(-1) = L^T L.
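A small NumPy sketch of that relation, with L built straight from the
stated diagonal and subdiagonal (again, the numbers below are just my
own test values); it also shows that the order of the factors matters,
i.e. L^T L gives M^(-1) while L L^T does not:

    import numpy as np

    mu, N = 0.7, 6                       # test values (my choice)
    r = np.exp(-mu)
    s = np.sqrt(1 - r**2)
    idx = np.arange(N)
    M = r ** np.abs(idx[:, None] - idx[None, :])

    # Bidiagonal L = K^(-1): diagonal {1, 1/s, ..., 1/s},
    # subdiagonal {-r/s, ..., -r/s}
    L = (np.diag(np.r_[1.0, np.full(N - 1, 1.0 / s)])
         + np.diag(np.full(N - 1, -r / s), k=-1))

    print(np.allclose(L.T @ L, np.linalg.inv(M)))   # True:  M^(-1) = L^T L
    print(np.allclose(L @ L.T, np.linalg.inv(M)))   # False: order matters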
I ground it out for myself when I was a grad student, after
encountering M in a context that implied it was positive definite,
and then reading somewhere that M^(-1) was tridiagonal. Since
then I've learned that M is a special case of a "Toeplitz matrix".
Google for it. There's lots out there.
Why is it necessary to look at the Cholesky factors of the inverse?
The factors of the matrix itself are
easy enough to get directly: if r = exp(-mu) and s = sqrt(1 - r^2),
then M = U^T U, where U is the upper-triangular
matrix
        [ 1   r    r^2    r^3    ...  r^(n-1)  ]
        [ 0   s    sr     sr^2   ...  sr^(n-2) ]
    U = [ 0   0    s      sr     ...  sr^(n-3) ]
        [ .   .    .      .           .        ]
        [ 0   0    0      0      ...  s        ]
This comes almost immediately from application of the Cholesky
algorithm.
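If it helps, here is a short Python/NumPy sketch (my own construction,
following the pattern above) that builds this U and checks M = U^T U:

    import numpy as np

    mu, n = 0.7, 6                       # test values (my choice)
    r = np.exp(-mu)
    s = np.sqrt(1 - r**2)
    idx = np.arange(n)
    M = r ** np.abs(idx[:, None] - idx[None, :])

    # Row 0 of U is 1, r, r^2, ...; row i >= 1 is zeros up to column i,
    # then s, s*r, s*r^2, ...
    U = np.zeros((n, n))
    U[0, :] = r ** idx
    for i in range(1, n):
        U[i, i:] = s * r ** np.arange(n - i)

    print(np.allclose(U.T @ U, M))                   # True: M = U^T U
    print(np.allclose(U, np.linalg.cholesky(M).T))   # agrees with the Cholesky factor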
R.G. Vickson
It isn't. I was just recounting how I did it 50 years ago,
which also happens to be how I remember the result :)
I suppose I should have stated my question as: how the heck do we
compute the Cholesky factors of the inverse? Any algorithms I have
seen have been for A itself, not for inv(A). Are there comparable (and
well-known) algorithms for the inverse?
R.G. Vickson
More generally, consider
M_i,j = x^(|i-j|), x real;
your case is x = exp(-mu).
M_2
1 x
x 1
M_3
1 x x^2
x 1 x
x^2 x 1
M_4
1 x x^2 x^3
x 1 x x^2
x^2 x 1 x
x^3 x^2 x 1
It is easy to see that
det(M_N) = (1-x^2) det(M_(N-1)):
replace the first column by (first col) - x*(second col); the first
column becomes (1-x^2, 0, ..., 0)^T, and expanding along it gives
(1-x^2) times det(M_(N-1)).
So det(M_N) = (1-x^2)^(N-1),
and if |x| < 1 (which is true in your case because mu > 0),
M_N is positive definite, because its leading principal minors are
just M_1, ..., M_N, and each det(M_i) = (1-x^2)^(i-1) > 0
(Sylvester's criterion).
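A quick numerical check of the determinant formula and the
leading-minor argument (Python/NumPy sketch; the value of x below is
just a test value of mine):

    import numpy as np

    x, N = 0.6, 7                        # test values (my choice)
    idx = np.arange(N)
    M = x ** np.abs(idx[:, None] - idx[None, :])

    # Every leading principal minor M_k has det(M_k) = (1 - x^2)^(k-1)
    for k in range(1, N + 1):
        print(k, np.linalg.det(M[:k, :k]), (1 - x**2) ** (k - 1))

    # Positive leading minors => positive definite (Sylvester's criterion);
    # equivalently, all eigenvalues of M are positive.
    print(np.all(np.linalg.eigvalsh(M) > 0))         # True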
At this point I really have no idea how I solved it. Maybe I did
get the Cholesky factor of M and then invert it. All I know is
that the note I left for myself referred to the inverse. I think
the problem arose in a course in psychological scaling. M was a
correlation matrix, and M^(-1) was interesting because it could
potentially allow the analyst to decide whether a Thurstonian
common factor model or a Guttmanian simplex model was more
appropriate for the data.
I was intrigued and found some keywords for a Google search
that I thought I'd share:
Trench algorithm Toeplitz Cholesky statistics
"Trench" is for William Trench, who has written on Toeplitz matrices:
< http://ramanujan.math.trinity.edu/wtrench/research/index.shtml > .