
proof of det(exp(A)) = exp(tr(A))


Ben Bullock

May 30, 1995

I am looking for a proof of the expression

det(exp(A)) = exp(tr(A))

where A is a matrix, det represents the determinant, exp represents the
exponential, and tr represents the trace.

[By the way, the exponential of a matrix is defined by

exp(A) = 1 + A + (1/2!) A * A + (1/3!) A * A * A + ...

where the symbol * represents matrix multiplication.]
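
A minimal Python sketch to check the series and the identity
numerically, assuming numpy is available (exp_series is just an
illustrative name for the series above, not a library routine):

    import numpy as np

    def exp_series(A, terms=30):
        """Matrix exponential via the series 1 + A + A*A/2! + ..."""
        result = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for k in range(1, terms):
            term = term @ A / k              # term is now A^k / k!
            result = result + term
        return result

    A = np.array([[1.0, 2.0], [0.0, 3.0]])   # tr(A) = 4
    print(np.linalg.det(exp_series(A)))      # ~ e^4 = 54.598...
    print(np.exp(np.trace(A)))               # ~ e^4, matching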

Thanks very much for any help with this problem.

--
Ben Bullock @ KEK (National Laboratory for High Energy Physics) /
address: 1-1 Oho, Tsukuba, Ibaraki 305, JAPAN / TEL: 0298-64-5403 /
FAX: 0298-64-7831 / e-mail: b...@theory.kek.jp / DECNET: KEKVAX::BEN
[in Japanese]: ベン・ブロック@高エネルギー物理学研究所 (つくば)

聞くは一時の恥、聞かぬは末代の恥。 [To ask is a moment's shame; not to ask is an everlasting shame.]

Zdislav V. Kovarik

May 30, 1995

In article <3qeehv$j...@keknews.kek.jp>, Ben Bullock <b...@theory1.kek.jp> wrote:
>I am looking for a proof of the expression
>
> det(exp(A)) = exp(tr(A))
>
>where A is a matrix, det represents the determinant, exp represents the
>exponential, and tr represents the trace.
[...]

(1) Special case: A is a (complex) upper triangular matrix with diagonal
entries a_jj, j = 1, ..., n. Then exp(A) is again upper triangular with
diagonal entries exp(a_jj), and since the determinant of a triangular
matrix is the product of its diagonal entries,

det(exp(A)) = prod_j exp(a_jj) = exp(sum_j a_jj) = exp(tr(A)).

(2) General case: every square complex matrix A can be written as
U R U^(-1), where U is unitary and R is upper triangular (Schur
decomposition). Then exp(A) = U exp(R) U^(-1), so
det(exp(A)) = det(exp(R)) and trace(A) = trace(R), hence the conclusion.

(Determinant of the product is the product of determinants, and
trace(PQ)=trace(QP) (proved directly), so
trace (URU^(-1)) = trace (U^(-1)UR) = trace(R).)
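
If you want to watch this numerically, here is a small Python sketch of
the same argument using scipy's Schur decomposition (numpy and scipy
assumed available; illustrative only):

    import numpy as np
    from scipy.linalg import schur, expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
    R, U = schur(A, output='complex')     # A = U R U^(-1), R upper triangular
    print(np.linalg.det(expm(A)))         # det(exp(A))
    print(np.exp(np.trace(R)))            # exp(tr(R)) = exp(tr(A)); they agree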

There are more elegant eigenvalue-free proofs, but this one is quite
transparent, I hope.

ZVK (Slavek).

Richard Larson

May 30, 1995

In article <3qeehv$j...@keknews.kek.jp>, b...@theory1.kek.jp (Ben Bullock) says:
>
>I am looking for a proof of the expression
>
> det(exp(A)) = exp(tr(A))
>
>where A is a matrix, det represents the determinant, exp represents the
>exponential, and tr represents the trace.
>
The simplest proof probably runs something like this: it suffices to
prove it when A is in Jordan normal form, since every matrix is
conjugate to its J.n.f. and det and tr are preserved by conjugation.
In J.n.f., exp(A) is upper triangular with the exp(x_i) on its diagonal,
so the identity is simply the fact that prod(exp(x_i)) = exp(sum x_i).
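
A numerical illustration of that last fact, as a Python sketch (numpy
and scipy assumed available; the matrix is just an example Jordan block):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[2.0, 1.0, 0.0],        # a single 3x3 Jordan block,
                  [0.0, 2.0, 1.0],        # eigenvalue 2 repeated
                  [0.0, 0.0, 2.0]])
    lam = np.linalg.eigvals(A)
    print(np.prod(np.exp(lam)))           # prod(exp(x_i)) ~ e^6
    print(np.exp(np.sum(lam)))            # exp(sum x_i)  ~ e^6
    print(np.linalg.det(expm(A)))         # det(exp(A))   ~ e^6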

------------------------------------------------------------------------
Richard G. Larson, Professor, | U. of Illinois at Chicago
Department of Mathematics, Statistics, | 851 S. Morgan St, M/C 249
and Computer Science | Chicago IL 60607-7045
* PGP public key: finger R...@uic.edu * | R...@uic.edu (312) 996-8616

Robert Israel

May 30, 1995

In article <3qeehv$j...@keknews.kek.jp>, b...@theory1.kek.jp (Ben Bullock) writes:
|> I am looking for a proof of the expression
|>
|> det(exp(A)) = exp(tr(A))
|>
|> where A is a matrix, det represents the determinant, exp represents the
|> exponential, and tr represents the trace.

Well, one way is using Jordan canonical form. Another way is to prove
that both det(exp(tA)) and exp(tr(tA)) satisfy the differential equation
y' = tr(A) y; since both equal 1 at t = 0, uniqueness of solutions gives
equality for all t. The key here is that det(I + tA) = 1 + t tr(A) + O(t^2)
as t -> 0, which follows from the representation of the determinant as a
sum over permutations of +-1 times products of matrix elements.
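
The O(t^2) claim is easy to see numerically; a throwaway Python check
(numpy assumed available):

    import numpy as np

    A = np.array([[1.0, 4.0], [2.0, 3.0]])    # tr(A) = 4
    for t in (1e-1, 1e-2, 1e-3):
        err = np.linalg.det(np.eye(2) + t * A) - (1 + t * np.trace(A))
        print(t, err, err / t**2)              # err/t^2 stays ~ constant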

--
Robert Israel isr...@math.ubc.ca
Department of Mathematics
University of British Columbia
Vancouver, BC, Canada V6T 1Y4

Ben Bullock

May 31, 1995

I would like to thank Eric Fresse, Thomas Andrews, Kin Yan Chung, Tom
Foregger, Robert Israel, Zdislav V. Kovarik and Richard Larson for
their help in solving my problem. Thank you very much indeed.

Kirk Lougheed

May 31, 1995

In article <3qeehv$j...@keknews.kek.jp>, Ben Bullock <b...@theory1.kek.jp> wrote:
>I am looking for a proof of the expression
>
> det(exp(A)) = exp(tr(A))
>

If you're working with complex numbers, diagonalize the matrix.
The trace is then the sum of the eigenvalues, so e^tr(A) becomes the
product of the exponentiated eigenvalues. The determinant of exp(A) is
likewise the product of the exponentiated eigenvalues.

If you're working with real numbers you may not be able to diagonalize
the matrix, but you can always convert it to Jordan canonical form.
A matrix in this form has the eigenvalues along the main diagonal, with
the occasional 1 on the superdiagonal just above it.
See any reasonable text on linear algebra for the justification.
The trace term is the same, and you can easily convince yourself that
those extra ones don't contribute to the determinant.

Regards,
Kirk

Daniel A. Asimov

May 31, 1995

In article <3qeehv$j...@keknews.kek.jp>, b...@theory1.kek.jp (Ben Bullock) writes:
> I am looking for a proof of the expression
>
> det(exp(A)) = exp(tr(A))
>
> where A is a matrix, det represents the determinant, exp represents the
> exponential, and tr represents the trace.
-------------------------------------------------------------------------

Assume first that A is diagonalizable, so that BAB^-1 = D (diagonal)
for some invertible matrix B. Then the formula follows easily.

But diagonalizable matrices are dense among all complex matrices
(those with distinct eigenvalues already form an open dense set).
So by the continuity of det, exp, and tr, the same formula must hold
for non-diagonalizable matrices as well.
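
The density argument can be watched in action with a small Python
sketch (numpy and scipy assumed available; the Jordan block below is a
standard non-diagonalizable example):

    import numpy as np
    from scipy.linalg import expm

    J = np.array([[1.0, 1.0], [0.0, 1.0]])   # not diagonalizable
    for eps in (1e-1, 1e-4, 0.0):
        A = J + eps * np.array([[0.0, 0.0], [1.0, 0.0]])
        # eps > 0: eigenvalues 1 +- sqrt(eps) are distinct, A diagonalizable
        print(np.linalg.det(expm(A)), np.exp(np.trace(A)))   # both ~ e^2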

Daniel Asimov
Senior Computer Scientist

Mail Stop T27A-1
NASA Ames Research Center
Moffett Field, CA 94035-1000

asi...@nas.nasa.gov
(415) 604-4799 w
(415) 604-3957 fax

Hubert HOLIN

Jun 1, 1995

If your matrix elements are in C, triangularize. The rest is trivial.


Kirk Lougheed

Jun 1, 1995

In article <lougheedD...@netcom.com>,
Kirk Lougheed <loug...@netcom.com> wrote:
> [snip]

>If you're working with complex numbers, diagonalize the matrix.
> [snip]

>If you're working with real numbers you may not be able to diagonalize
>the matrix, but you can always convert it to Jordan canonical form.
> [snip]

In some email that showed up at my site before his posting, Dik Winter
pointed out that the above statement regarding complex matrices always
being diagonalizable is nonsense. What working in the complex field
buys you is the existence of at least one eigenvalue. It most
definitely doesn't guarantee enough independent eigenvectors to
diagonalize the matrix.

So skip straight to the Jordan normal form....

Kirk

Marco Moriconi

Jun 3, 1995

You can prove that using the triangular form of A.

Marco Moriconi
mori...@phoenix.princeton.edu

Michael K. Murray

Jun 3, 1995

The proof I have always seen goes as follows:

1) Note that matrices with distinct eigenvalues
are dense in all matrices, i.e. if I take a matrix,
arbitrarily small changes in its entries will
make its eigenvalues distinct.

2) If two continuous functions on a metric space
agree on a dense subspace, they agree everywhere.

3) Clearly both sides of the required equation
are continuous functions on matrices.

Matrices with distinct eigenvalues are diagonalisable,
so diagonalise and prove the identity on that dense
set of matrices. By 1, 2 and 3 it is true everywhere.

Michael

IMRE BOKOR

Jun 7, 1995

Kirk Lougheed (loug...@netcom.com) wrote:
: In article <lougheedD...@netcom.com>,

You don't really need the Jordan normal form. It's enough to
know that the coefficients of the characteristic polynomial
are the appropriate homogeneous symmetric polynomials in
the eigenvalues, and that these coefficients lie in the base
field even when the eigenvalues themselves do not, as for
the real matrix

( 1  -1 )
( 1   1 )

whose eigenvalues are 1 +- i.
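
Concretely, a quick Python check with sympy (assumed available;
illustrative only):

    from sympy import Matrix, symbols

    x = symbols('x')
    A = Matrix([[1, -1], [1, 1]])
    print(A.charpoly(x).as_expr())   # x**2 - 2*x + 2: real coefficients
    print(A.eigenvals())             # {1 - I: 1, 1 + I: 1}: not real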

d.A.


Jan Willem Nienhuys

Jun 9, 1995

ibo...@metz.une.edu.au (IMRE BOKOR) writes:

>You don't really need the Jordan normal form. It's enough to
>know that the coefficients of the characteristic polynomial
>are the appropriate homogeneous symmetric polynomials in
>the eigenvalues, and that these coefficients lie in the base
>field even when the eigenvalues themselves do not, as for
>the real matrix

>( 1  -1 )
>( 1   1 )

I always thought that the standard proof of the subject-line identity
went as follows.

exp(At)' = A exp(At), so

det(exp(At))' = (simple linear algebra calculation) = tr(A) det(exp(At)).

Moreover, at t = 0 we get det(exp(0)) = det(I) = 1, hence

det(exp(At)) = exp(tr(A) t). Now plug in t = 1.

As for the simple linear algebra calculation: choose any basis
e_1, ..., e_n, write X = exp(At) for brevity, and note
det(X) = det(Xe_1, ..., Xe_n). Then

det(X)' = sum_i det(Xe_1, ..., X'e_i, ..., Xe_n)
        = sum_i det(Xe_1, ..., AXe_i, ..., Xe_n)
        = tr(A) det(Xe_1, ..., Xe_n),

the last equality because, expanding each AXe_i in the basis
Xe_1, ..., Xe_n, only the Xe_i-component survives in the i-th
determinant, and the sum of those diagonal coefficients is
tr(X^(-1)AX) = tr(A).
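
A finite-difference sanity check of that differential equation, as a
Python sketch (numpy and scipy assumed available):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, 1.0], [-2.0, 3.0]])
    y = lambda t: np.linalg.det(expm(A * t))   # y(t) = det(exp(At))
    t, h = 0.7, 1e-6
    dydt = (y(t + h) - y(t - h)) / (2 * h)     # central difference
    print(dydt, np.trace(A) * y(t))            # both ~ 24.5: y' = tr(A) y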

I missed the beginning of the thread. Maybe the question was
how to prove the identity without differential equations.

On the other hand, an identity involving trace and determinant
is most elegantly proved by using the properties of det and tr,
and almost any question involving exp can be most elegantly
solved by differential equations, i.e. the property that
exp(x)' = exp(x). That's a fact of life.

JWN




Ben Bullock

Jun 12, 1995

Jan Willem Nienhuys (wsa...@urc.tue.nl) wrote:

> I missed the beginning of the thread. Maybe the question was
> how to prove the identity without differential equations.

FWIW I started this thread, and I didn't specify any method of proof.

Thanks once again to all the people who have sent me answers or posted
them, and I am sorry if I left your name off my original list due to
time delays in receiving it here.
