
Exp(A).Exp(B) = ..


Arvind Murugan

Jan 5, 2003, 3:34:04 AM
to sci-physic...@moderators.isc.org

Is this correct:

Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?

where [A,B] = AB - BA

It seems to appear in quite a few physics books, but every math book
I look at sticks to the [A,B] = 0 case. The above seems to sort of
work in some physics problems I had to do, but then in physics I
always had [A,B] = a scalar operator (the identity times a constant). I
couldn't verify the general formula using power series. So is it true
in the general case? I suspect it is true only when [A,B] commutes
with A and/or B. (Is one of them implied by the other?)

Arvind

Louis M. Pecora

Jan 5, 2003, 4:54:58 PM
to sci-physic...@moderators.isc.org

In article <88e613c0.03010...@posting.google.com>, Arvind
Murugan <twist_u...@yahoo.com> wrote:

> Is this correct:
>
> Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?
>
> where [A,B] = AB - BA

Only if the commutator ([,]) of the matrices produces a matrix that
commutes with A and B. Otherwise the series of matrix commutators in
the exponential on the right-hand side is infinite. See the
Campbell-Baker-Hausdorff formula in just about any book on Lie groups.

I recall that in some physics books A and B might be conjugate
operators (like position and momentum) in QM and then the commutator is
something like the unit matrix times Planck's const over 2 pi, or
something like that. Check that out, don't quote me.

--
Lou Pecora
- My views are my own.
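When [A,B] commutes with both A and B, the series closes after the first commutator, and the resulting identity Exp(A)Exp(B) = Exp(A + B + [A,B]/2) can be checked exactly with nilpotent matrices, since every exponential is then a finite sum. A minimal sketch in Python with exact rational arithmetic; the 3x3 Heisenberg-type matrices below are an illustrative choice, not anything from the thread:

```python
from fractions import Fraction

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def sca(c, X):
    return [[c * a for a in r] for r in X]

def comm(X, Y):                      # [X,Y] = XY - YX
    return add(mul(X, Y), sca(-1, mul(Y, X)))

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def mexp(M):                         # exact for nilpotent M: series terminates
    n = len(M)
    out = term = eye(n)
    for k in range(1, n):
        term = sca(Fraction(1, k), mul(term, M))
        out = add(out, term)
    return out

# Heisenberg-type pair: [A,B] lands in the corner entry and
# commutes with both A and B ("identity * a constant" in spirit).
A = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
B = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
C = comm(A, B)
assert comm(A, C) == comm(B, C) == sca(0, C)   # [A,[A,B]] = [B,[A,B]] = 0

lhs = mul(mexp(A), mexp(B))
rhs = mexp(add(add(A, B), sca(Fraction(1, 2), C)))
assert lhs == rhs                    # Exp(A)Exp(B) = Exp(A + B + [A,B]/2)

wrong = mexp(add(add(A, B), sca(Fraction(-1, 2), C)))
assert lhs != wrong                  # the minus sign of the original post fails
```

Note the plus sign in front of [A,B]/2; with the minus sign of the original question the two sides already differ at second order.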

Pierre Asselin

Jan 5, 2003, 4:59:19 PM
to sci-physic...@moderators.isc.org


>Is this correct:

>Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?

>where [A,B] = AB - BA

No.


>I suspect this is true only when [A,B] commutes
>with A and/or B..

Correct. Only when [A,B] commutes with A and B.

Alfred Einstead

Jan 6, 2003, 3:03:02 PM
to sci-physic...@moderators.isc.org

twist_u...@yahoo.com (Arvind Murugan) wrote:
> Is this correct:
> Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?
> where [A,B] = AB - BA

To find X in Exp(A).Exp(B) = Exp(A + B + X) amounts
to determining
X = log(Exp(A).Exp(B)) - (A + B).
You can systematically find the exact expression, but
I'll show the expansion for the leading few terms only,
up to order 3:

Exp(A).Exp(B) = (1+A+AA/2+AAA/6)(1+B+BB/2+BBB/6)
= 1+(A+B)+(AA+2AB+BB)/2+(AAA+3AAB+3ABB+BBB)/6

Use the expansion log(1+x)=x-xx/2+xxx/3-..., then

log(Exp(A).Exp(B)) =
(A+B)+(AA+2AB+BB)/2+(AAA+3AAB+3ABB+BBB)/6
- 1/2 ((A+B)+(AA+2AB+BB)/2)^2
+ 1/3 ((A+B))^3.
So
log(Exp(A).Exp(B)) - (A+B) =
(AA+2AB+BB)/2 + (AAA+3AAB+3ABB+BBB)/6
- (AA+AB+BA+BB)/2 - (A+B)(AA+2AB+BB)/4 - (AA+2AB+BB)(A+B)/4
+ (AAA+AAB+ABA+ABB+BAA+BAB+BBA+BBB)/3.

The second order terms give you:
1/2 (AA+2AB+BB-AA-AB-BA-BB) = 1/2 (AB-BA) = [A,B]/2.

The third order terms give you:
1/6 (AAA+3AAB+3ABB+BBB)
- 1/4 (AAA+BAA+2AAB+2BAB+ABB+BBB+AAA+AAB+2ABA+2ABB+BBA+BBB)
+ 1/3 (AAA+AAB+ABA+ABB+BAA+BAB+BBA+BBB)
which sorts out to
1/12 (AAB-2ABA+ABB+BAA-2BAB+BBA)
= 1/12 (A(AB-BA)+(BA-AB)A + (AB-BA)B+B(BA-AB))
= 1/12 (A[A,B] - [A,B]A + [A,B]B - B[A,B])
= 1/12 ([A,[A,B]] + [[A,B],B]).
= [A-B,[A,B]]/12
Therefore, to the third order:

Exp(A).Exp(B) = Exp(A + B + [A,B]/2 + [A-B,[A,B]]/12 + ...).

The minimum required to lose the 3rd order term is that

[A-B,[A,B]] = 0.

The higher order terms will involve commutators of various
polynomials in A and B with [A,B]. So the general expression
is:
Exp(A).Exp(B) = Exp(A + B + [A,B]/2 + [p(A,B),[A,B]]),
where
p(A,B) = (A-B)/12 + ...
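The truncation at third order can be checked exactly: for strictly upper-triangular 4x4 matrices every product of four factors vanishes, so the whole BCH series stops at the [A-B,[A,B]]/12 term and the expansion above holds on the nose. A sketch in Python with exact rational arithmetic (the particular matrices are an arbitrary illustrative choice):

```python
from fractions import Fraction

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def sca(c, X):
    return [[c * a for a in r] for r in X]

def comm(X, Y):                          # [X,Y] = XY - YX
    return add(mul(X, Y), sca(-1, mul(Y, X)))

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def mexp(M):                             # exact for nilpotent M
    n = len(M)
    out = term = eye(n)
    for k in range(1, n):
        term = sca(Fraction(1, k), mul(term, M))
        out = add(out, term)
    return out

def mlog(U):                             # exact log of a unipotent matrix
    n = len(U)
    N = add(U, sca(-1, eye(n)))          # U = I + N with N nilpotent
    out, P = sca(0, U), eye(n)
    for k in range(1, n):
        P = mul(P, N)
        out = add(out, sca(Fraction((-1) ** (k + 1), k), P))
    return out

# Strictly upper-triangular 4x4: any product of four of them vanishes,
# so the BCH series terminates exactly at third order.
A = [[0, 1, 2, 0], [0, 0, 1, 3], [0, 0, 0, 1], [0, 0, 0, 0]]
B = [[0, 2, 0, 1], [0, 0, 3, 0], [0, 0, 0, 2], [0, 0, 0, 0]]

Z = mlog(mul(mexp(A), mexp(B)))          # log(Exp(A).Exp(B)), exact
C = comm(A, B)
bch3 = add(add(add(A, B), sca(Fraction(1, 2), C)),
           sca(Fraction(1, 12), comm(add(A, sca(-1, B)), C)))
assert Z == bch3                         # A + B + [A,B]/2 + [A-B,[A,B]]/12
```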

Pierre Vanhove

Jan 6, 2003, 6:19:14 PM
to Arvind Murugan

> Is this correct:
>
> Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?
>
> where [A,B] = AB - BA

The sign is wrong (see below). The general case of this formula is known
under the name of the "Baker-Campbell-Hausdorff formula", and you should
find out about it in any group theory book (certainly Hamermesh's book).

Its general expression is

(1) e^x e^y = e^z

with

(2) z = x + int_0^1 dt Psi(exp (Ad x) exp (t Ad y)) y

where the Psi function is

(3) Psi(u) = u ln(u)/(u-1) = 1 + 1/2 (u-1) - 1/6 (u-1)^2 +...

And the operator Ad a acts on Lie Algebra elements as

(4) Ad a b = [a,b] = a.b - b.a

(Ad a)^n b is the nth iterated commutator: (Ad a)^n b
= [a,[a,...,[a,b]...]] with n brackets.

[this is what appears in the general identity

(4b) e^x y e^-x = sum_n>=0 1/n! (Ad x)^n y
]

For your case of interest, when x and y commute with [x,y] (which is
generally the case in quantum mechanics),

(5) z=x+y+1/2 [x,y]

and

(6) e^x.e^y = e^(x+y+ 1/2 [x,y]) = exp(x+y) exp(1/2 [x,y])

The last equality holds because x and y commute with [x,y].

Pierre Vanhove
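Identity (4b) is easy to check exactly when x is nilpotent, since (Ad x)^n y then vanishes for all large n and both sides are finite sums. A minimal Python sketch with exact rational arithmetic (the matrices below are an illustrative choice, assuming nothing beyond (4b) itself):

```python
from fractions import Fraction
from math import factorial

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def sca(c, X):
    return [[c * a for a in r] for r in X]

def comm(X, Y):                      # Ad X applied to Y, i.e. [X,Y]
    return add(mul(X, Y), sca(-1, mul(Y, X)))

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def mexp(M):                         # exact for nilpotent M
    n = len(M)
    out = term = eye(n)
    for k in range(1, n):
        term = sca(Fraction(1, k), mul(term, M))
        out = add(out, term)
    return out

X = [[0, 1, 0, 2], [0, 0, 3, 0], [0, 0, 0, 1], [0, 0, 0, 0]]   # nilpotent
Y = [[1, 0, 2, 0], [0, 3, 0, 1], [2, 0, 1, 0], [0, 1, 0, 2]]   # arbitrary

lhs = mul(mul(mexp(X), Y), mexp(sca(-1, X)))   # e^x y e^-x

rhs, adY = sca(0, Y), Y
for n in range(8):                   # (Ad X)^n Y = 0 once n >= 7 here
    rhs = add(rhs, sca(Fraction(1, factorial(n)), adY))
    adY = comm(X, adY)

assert lhs == rhs                    # e^x y e^-x = sum_n 1/n! (Ad x)^n y
```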

Aaron Bergman

Jan 6, 2003, 6:24:14 PM
to
In article <88e613c0.03010...@posting.google.com>, Arvind Murugan
wrote:

>
>Is this correct:
>
>Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?
>
>where [A,B] = AB - BA

Only if all higher commutators vanish; for example, if [A,B] is,
as you say, a c-number. For the general formula, look up the
Baker-Campbell-Hausdorff formula. I'd guess something like
mathworld.wolfram.com would have something on it.

Aaron
--
Aaron Bergman
<http://www.princeton.edu/~abergman/>

Tc Hughes1

Jan 6, 2003, 6:24:30 PM
to
<< Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?

where [A,B] = AB - BA >>

Not in general.

<< I suspect this is true only when [A,B] commutes
with A and/or B.. >>

Yes I think it relies on an assumption very near to this. I think it requires
that A and B commute with their commutator.
-Taylor

Gerald F. Thomas

Jan 6, 2003, 10:37:59 PM
to
"Arvind Murugan" <twist_u...@yahoo.com> wrote in message
news:88e613c0.03010...@posting.google.com...

In W.H. Louisell's 'Quantum Statistical Properties of Radiation'
(QC680.L65 1973), Sec. 3.1, Th.4, you'll find a rehash of Glauber's
Phys. Rev. proof that exp(A+B) = exp(A)exp(B)exp(-1/2[A,B]) =
exp(B)exp(A)exp(1/2[A,B])
if [A,B] commutes with A and B. The condition [A,B] = 0 is sufficient
but not necessary for exp(A+B) = exp(A)exp(B).

HTH,
Gerry T.
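Glauber's disentangling identity can be checked exactly in a minimal Heisenberg-like representation, where [A,B] is a "c-number" matrix commuting with A and B and every exponential is a finite sum. A sketch in Python with exact rational arithmetic (the 3x3 matrices are an illustrative choice):

```python
from fractions import Fraction

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def sca(c, X):
    return [[c * a for a in r] for r in X]

def comm(X, Y):                      # [X,Y] = XY - YX
    return add(mul(X, Y), sca(-1, mul(Y, X)))

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def mexp(M):                         # exact for nilpotent M
    n = len(M)
    out = term = eye(n)
    for k in range(1, n):
        term = sca(Fraction(1, k), mul(term, M))
        out = add(out, term)
    return out

A = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
B = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]
C = comm(A, B)                       # commutes with both A and B here

lhs = mexp(add(A, B))
rhs = mul(mul(mexp(A), mexp(B)), mexp(sca(Fraction(-1, 2), C)))
rhs2 = mul(mul(mexp(B), mexp(A)), mexp(sca(Fraction(1, 2), C)))
assert lhs == rhs == rhs2            # both orderings of the identity
```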

John Park

Jan 6, 2003, 10:36:58 PM
to

Yes. This is proved in Appendix A of _Principles of Magnetic Resonance_ by
C. P. Slichter (Springer Verlag, 1980).


--John Park

John Baez

Jan 6, 2003, 10:26:14 PM
to
In article <88e613c0.03010...@posting.google.com>,
Arvind Murugan <twist_u...@yahoo.com> wrote:

>Is this correct:
>
>Exp(A).Exp(B) = Exp(A + B - 1/2 [A,B] ) ?
>
>where [A,B] = AB - BA

No. First of all, I think you want a plus sign where you
wrote a minus sign. Second of all, this is the beginning
of the Baker-Campbell-Hausdorff formula, but there are more
terms unless [A,B] commutes with both A and B. In general,
there are infinitely many terms on the right-hand side,
involving fancier and fancier commutators of A's and B's.

>It seems to appear in a quite a few physics books but every math book
>I look at sticks to the [A,B] = 0 case.

If you look at Varadarajan's book on Lie algebras, or
Helgason's book on Lie groups, they will show you the whole
Baker-Campbell-Hausdorff formula. I was unable to find it
on the web for you... Eric Weisstein's "Mathworld" has a
version of it:

http://mathworld.wolfram.com/Baker-Campbell-HausdorffSeries.html

but it's not very nice, because it's not written out
in terms of commutators!



>So is it true
>in the general case? I suspect this is true only when [A,B] commutes

>with A and/or B. (is one of them implied by the other?)

Alas, [A,[A,B]] = 0 does not imply [B,[A,B]] = 0,
so in general you need to check them both.


John Baez

Jan 6, 2003, 10:32:04 PM
to
In article <e58d56ae.03010...@posting.google.com>,
Alfred Einstead <whop...@csd.uwm.edu> wrote:

>Therefore, to the third order:
>
> Exp(A).Exp(B) = Exp(A + B + [A,B]/2 + [A-B,[A,B]]/12 + ...).

... and we leave it as an exercise for the maniacal reader
to relate the "12" here to the fact that

zeta(-1) = -1/12

>The minimum required to lose the 3rd order term is that
>
> [A-B,[A,B]] = 0.

Oh, okay: this is better than my requirement that
[A,B] commute with both A and B.

eb...@lfa221051.richmond.edu

Jan 7, 2003, 4:03:12 PM
to sci-physic...@moderators.isc.org

Well, it's a weaker requirement, but it also leads to a weaker conclusion,
right? This requirement only guarantees that the 3rd-order term vanishes,
whereas if you have [A,[A,B]]=[B,[A,B]]=0, then all the higher-order
terms vanish too.

-Ted

--
[E-mail me at na...@domain.edu, as opposed to na...@machine.domain.edu.]

Alfred Einstead

Jan 9, 2003, 4:27:51 AM
to

Pierre Vanhove <pvan...@cern.ch> wrote:

> Its general expression is
> (1) e^x e^y = e^z
> with
> (2) z = x + int_0^1 dt Psi(exp (Ad x) exp (t Ad y)) y
> where the Psi function is
> (3) Psi(u) = u ln(u)/(u-1) = 1 + 1/2 (u-1) - 1/6 (u-1)^2 +...
> And the operator Ad a acts on Lie Algebra elements as
> (4) Ad a b = [a,b] = a.b - b.a

The proof, when all is said and done, relies on a couple of
properties. Use ()' to denote the adjoint operator, so that
A'[B] is just [A,B].

(A) exp(-A) B exp(A) = exp(-A')[B].
Take f(s) = exp(-sA) B exp(sA). Then
f(0) = B
f'(s) = -A exp(-sA) B exp(sA) + exp(-sA) B exp(sA) A
= -A f(s) + f(s) A
= -A'[f(s)].

In the theory of operators (like for ordinary variables), the
general solution to
f(0) = B, f'(s) = Q f(s)
is
f(s) = exp sQ [B].
Therefore,
exp(-sA) B exp(sA) = exp(-sA')[B].

(B) exp(-A) d/dt exp(A) = (1 - exp(-A'))/A' [dA/dt].
The operator function f(x) = (1 - exp(-x))/x is defined by its
series expansion: f(x) = 1 - x/2 + x^2/3! - x^3/4! + ...

Take f(s) = exp(-sA) d/dt exp(sA). Then, this time f(0) = 0,
and
f'(s) = -A f(s) + exp(-sA) d/dt (A exp(sA))
= -A f(s) + exp(-sA) dA/dt exp(sA) + exp(-sA) A d/dt exp(sA).
But A commutes with exp(-sA). Therefore, the last term is just
A exp(-sA) d/dt exp(sA) = A f(s),
which cancels the first term. Therefore,
f'(s) = exp(-sA) dA/dt exp(sA) = exp(-sA')[dA/dt].

The general solution to the operator equation:
f(0) = 0, f'(s) = exp(-sQ)[B]
is (as for the case of ordinary variables) just
(1 - exp(-sQ))/Q [B].
Therefore,
f(s) = (1 - exp(-sA'))/A' [dA/dt].

(C) Since you're looking for an expansion of the form
exp(A) exp(B) = exp(C),
then define C(t) = ln(exp(A)exp(tB)). Then, applying (B) you get:

B = exp(-tB) exp(-A) d/dt (exp(A) exp(tB))
= exp(-C(t)) d/dt exp(C(t))
= (1 - exp(-C(t)'))/C(t)' [d/dt C(t)].
= (exp(C(t)') - 1)/(C(t)' exp(C(t)')) [d/dt C(t)].

Inverting this operator, you get:

d/dt C(t) = C(t)' exp(C(t)')/(exp(C(t)') - 1) [B].

Applying (A), you can find the operator exp(C(t)'):

exp(C(t)')[Q] = exp(C(t)) Q exp(-C(t))
= exp(A) exp(tB) Q exp(-tB) exp(-A)
= exp(A) exp(tB')[Q] exp(-A)
= exp(A')[exp(tB')[Q]].
Thus
exp(C(t)') = exp(A') exp(tB'), which we'll call Z
and
C(t)' = ln(exp(A')exp(tB')) = ln(Z)
Therefore,
d/dt C(t) = Z ln(Z)/(Z-1) [B]
C(0) = ln(exp(A)exp(0B)) = ln(exp(A)) = A.
Thus
C(T) = A + integral(t=0 to T): Z ln Z/(Z-1) [B].
At T = 1, this results in:

C = A + integral(t=0 to 1): Z ln Z/(Z-1) [B],

which is the quoted result.

Alfred Einstead

Jan 9, 2003, 3:04:49 PM
to sci-physic...@moderators.isc.org

ba...@galaxy.ucr.edu (John Baez) wrote:
> on the web for you... Eric Weisstein's "Mathworld" has a
> version of [the formula]:
> http://mathworld.wolfram.com/Baker-Campbell-HausdorffSeries.html

You can actually go quite a bit further and define
"warped addition" by:
A [+] B = ln(exp(A) exp(B)).
This has most of the usual properties of addition:
A [+] 0 = A = 0 [+] A,
A [+] -A = 0 = -A [+] A,
A [+] ... (n times) ... A = nA, for n = 2, 3, 4, ...
(A[+]B)[+]C = A[+](B[+]C),
and can define
A [-] B = A [+] -B,
as expected.

Warped addition is actually the operation you're doing when
you add vectors in a curved space, for instance.

The "warped commutator" then is just:
[[A,B]] = (A[+]B) [-] (B[+]A)
= A [+] B [-] A [-] B
= ln(exp(A) exp(B) exp(-A) exp(-B)),

In group theory, the commutator of g and h is defined as
g h g^{-1} h^{-1}. So, the warped commutator is just
the logarithm of the (group-theoretic) commutator
of the exponentials. It expands as

[[A,B]] = [A,B] + 1/2 [A+B,[A,B]] + ...

(The remainder is not of the form [p(A,B),[A,B]] as I
previously said, but more complex).

The original question, rendered here, is what conditions
characterize the equality of the commutators

[[A,B]] = [A,B]?

Having [A,B] commute with A and B is sufficient, but
might not be necessary.

This makes it easier to write down the technically
correct form of the commutator relations (the Weyl
form) for the p,q variables of an n-dimensional
quantum system.
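For what it's worth, the leading terms of the warped commutator can be pinned down exactly with strictly upper-triangular 4x4 matrices, where everything of degree four and higher vanishes: in that truncation, ln(exp(A) exp(B) exp(-A) exp(-B)) equals [A,B] + 1/2 [A+B,[A,B]] on the nose. A Python sketch with exact rational arithmetic (the matrices are an illustrative choice):

```python
from fractions import Fraction

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[a + b for a, b in zip(r, s)] for r, s in zip(X, Y)]

def sca(c, X):
    return [[c * a for a in r] for r in X]

def comm(X, Y):                          # [X,Y] = XY - YX
    return add(mul(X, Y), sca(-1, mul(Y, X)))

def eye(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def mexp(M):                             # exact for nilpotent M
    n = len(M)
    out = term = eye(n)
    for k in range(1, n):
        term = sca(Fraction(1, k), mul(term, M))
        out = add(out, term)
    return out

def mlog(U):                             # exact log of a unipotent matrix
    n = len(U)
    N = add(U, sca(-1, eye(n)))
    out, P = sca(0, U), eye(n)
    for k in range(1, n):
        P = mul(P, N)
        out = add(out, sca(Fraction((-1) ** (k + 1), k), P))
    return out

A = [[0, 1, 2, 0], [0, 0, 1, 3], [0, 0, 0, 1], [0, 0, 0, 0]]
B = [[0, 2, 0, 1], [0, 0, 3, 0], [0, 0, 0, 2], [0, 0, 0, 0]]

# [[A,B]] = ln(exp(A) exp(B) exp(-A) exp(-B))
grp = mul(mul(mul(mexp(A), mexp(B)), mexp(sca(-1, A))), mexp(sca(-1, B)))
warped = mlog(grp)

C = comm(A, B)
third = add(C, sca(Fraction(1, 2), comm(add(A, B), C)))
assert warped == third                   # [A,B] + [A+B,[A,B]]/2, exactly
assert warped != C                       # the correction really is nonzero
```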

John Baez

Jan 13, 2003, 10:51:26 PM
to
In article <avf6ls$inq$1...@lfa222122.richmond.edu>,
<eb...@lfa221051.richmond.edu> wrote:

>In article <avdhnk$pgh$1...@glue.ucr.edu>, John Baez <ba...@galaxy.ucr.edu> wrote:

>>In article <e58d56ae.03010...@posting.google.com>,
>>Alfred Einstead <whop...@csd.uwm.edu> wrote:

>>>The minimum required to lose the 3rd order term is that
>>>
>>> [A-B,[A,B]] = 0.

>>Oh, okay: this is better than my requirement that
>>[A,B] commute with both A and B.

>Well, it's a weaker requirement, but it also leads to a weaker conclusion,
>right? This requirement only guarantees that the 3rd-order term vanishes,
>whereas if you have [A,[A,B]]=[B,[A,B]]=0, then all the higher-order
>terms vanish too.

Oh, duh. Right. I haven't checked that the higher-order terms
NEED NOT vanish if we only have [A-B,[A,B]] = 0, but I'll take
your word for it.


Alfred Einstead

Jan 15, 2003, 3:32:31 AM
to
ba...@galaxy.ucr.edu (John Baez) wrote:

>Ted Bunn, though not cited by "Alfred Einstead", was the one who wrote:

> >>> [A-B,[A,B]] = 0.


> >This requirement only guarantees that the 3rd-order term vanishes,
> >whereas if you have [A,[A,B]]=[B,[A,B]]=0, then all the higher-order
> >terms vanish too.

> Oh, duh. Right. I haven't checked that the higher-order terms
> NEED NOT vanish if we only have [A-B,[A,B]] = 0, but I'll take
> your word for it.

But it does raise an interesting point. Some people have said
that exp(A)exp(B) = exp(A+B+[A,B]/2) only if [A,B] commutes with
both A and B. The correct statement is "if", not "only if".
The "only if" part isn't necessarily true.

The general expression is exp(A)exp(B) = exp(A + B + f(A',B')[A,B])
where f(A',B') is a function of the corresponding adjoint operators
A': C |-> [A,C], B': C |-> [B,C]
that starts out as
f(x,y) = 1/2 + (x-y)/12 + ...

The necessary and sufficient condition is that [A,B] be an
eigenvector of the linear operator f(A',B') with eigenvalue 1/2.
