I have three matrices (in Matlab notation):

a = rand(4,4);
b = rand(3,3);
p = rand(3,4);
I'm trying to find an analytical proof of the following statement:
det(eye(size(b))+b*p*inv(a)*p') = det(eye(size(a))+inv(a)*p'*b*p)
By experimentation, this seems to hold. I also assume there is an
existing proof but cannot find it anywhere.
Does anyone know this proof? Or can someone demonstrate it?
Many thanks in advance.
Phil.
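For readers without Matlab, the experiment is easy to replicate; here is a rough equivalent in plain Python (no NumPy assumed, so det and inv are hand-rolled via Laplace expansion and the adjugate; the helper names are my own, not from any library):

```python
import random

def matmul(A, B):
    # naive product of nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def inv(M):
    # inverse via the adjugate: inv[j][i] = cofactor(i,j) / det(M)
    n, d = len(M), det(M)
    def minor(i, j):
        return [row[:j] + row[j + 1:] for k, row in enumerate(M) if k != i]
    return [[(-1) ** (i + j) * det(minor(i, j)) / d for i in range(n)]
            for j in range(n)]

def eye(n):
    return [[float(i == j) for j in range(n)] for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(r, s)] for r, s in zip(A, B)]

def T(M):
    return [list(c) for c in zip(*M)]

random.seed(0)
a = [[random.random() for _ in range(4)] for _ in range(4)]  # rand(4,4)
b = [[random.random() for _ in range(3)] for _ in range(3)]  # rand(3,3)
p = [[random.random() for _ in range(4)] for _ in range(3)]  # rand(3,4)

# eye(size(b)) + b*p*inv(a)*p'  vs  eye(size(a)) + inv(a)*p'*b*p
lhs = det(add(eye(3), matmul(b, matmul(p, matmul(inv(a), T(p))))))
rhs = det(add(eye(4), matmul(inv(a), matmul(T(p), matmul(b, p)))))
print(abs(lhs - rhs) < 1e-6)
```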
I "proved" it symbolically in Maple by just leaving all matrix entries
for a,b,p as independent indeterminates. In that sense, the proof is
easy, but unrevealing.
Remarks:
Firstly, since inv(a) appears in your expressions, but not a, why not
just call it a instead of inv(a)? In fact, the identity holds even if
a is not invertible.
Also, the dimensions 4,3 appear to be irrelevant: it seems they can be
replaced by arbitrary positive integers m,n and the identity still
holds.
Based on the above remarks, let me restate your problem as a
conjecture. I'll change the notation slightly.
Conjecture:
Let m,n be positive integers.
Let I_m and I_n be the m x m and n x n identity matrices,
respectively.
Let A be any m x m matrix, B any n x n matrix, and P any m x n matrix.
Let P' denote the transpose of P.
Then the following identity holds:
det(I_n + BPAP') = det( I_m + AP'BP)
Remarks:
Using Maple, I verified the conjecture for various pairs (m,n). The
conjecture appears valid.
How about the special case n = 1? I'd expect the proof to be easier
when B is just a constant.
quasi
Correction:
det(I_m + APBP') = det(I_n + BP'AP)
Noting istvankaster's reply, I see that the conjecture follows
immediately from "Sylvester's determinant theorem".
quasi
I.e. det(I+AB) = det(I+BA). I don't ever recall seeing this called
"Sylvester's determinant theorem". Does anyone know a reference?
The proof is easy: in the polynomial ring Z[Aij,Bij] simply
take the det of (I+AB)A = A(I+BA), and then cancel det(A)
See my prior posts [1] for much more on such "universal" proofs.
Another proof arises from the following Schur decomposition,
but that is far too much effort compared to the above proof.
  [ I A ]   [ I 0 ] [ I  0   ] [ I A ]
  [     ] = [     ] [        ] [     ]
  [ B I ]   [ B I ] [ 0 I-BA ] [ 0 I ]

            [ I A ] [ I-AB 0 ] [ I 0 ]
          = [     ] [        ] [     ]
            [ 0 I ] [ 0    I ] [ B I ]
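Either way, the square-case identity det(I+AB) = det(I+BA) is easy to sanity-check numerically; a minimal sketch in Python (hand-rolled helpers and an arbitrary seed, purely illustrative):

```python
import random

def matmul(A, B):
    # naive product of nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def I_plus(P):
    # I + P for a square matrix P
    return [[P[i][j] + (i == j) for j in range(len(P))] for i in range(len(P))]

random.seed(1)
n = 4
A = [[random.random() for _ in range(n)] for _ in range(n)]
B = [[random.random() for _ in range(n)] for _ in range(n)]

d1 = det(I_plus(matmul(A, B)))  # det(I + AB)
d2 = det(I_plus(matmul(B, A)))  # det(I + BA)
print(abs(d1 - d2) < 1e-9)
```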
--Bill Dubuque
[1] http://google.com/groups?selm=y8zwtm6mnpv.fsf%40nestle.csail.mit.edu
http://google.com/groups?selm=y8zznru8v1d.fsf%40nestle.ai.mit.edu
http://google.com/groups?selm=y8zwtf2am8x.fsf%40nestle.csail.mit.edu
>istvan...@gmail.com wrote:
>>On nov. 9, 11:19, ptres...@googlemail.com wrote:
>>>
>>> I have three matrices (in Matlab notation):
>>>
>>> a = rand(4,4); b = rand(3,3); p = rand(3,4);
>>>
>>> I'm trying to find an analytical proof of the following statement:
>>>
>>> det(eye(size(b))+b*p*inv(a)*p') = det(eye(size(a))+inv(a)*p'*b*p)
>>
>> http://en.wikipedia.org/wiki/Sylvester%27s_determinant_theorem
>
>I.e. det(I+AB) = det(I+BA). I don't ever recall seeing this called
>"Sylvester's determinant theorem". Does anyone know a reference?
>
>The proof is easy: in the polynomial ring Z[Aij,Bij] simply
>
> take the det of (I+AB)A = A(I+BA), and then cancel det(A)
Very nice.
quasi
> istvan...@gmail.com wrote:
>> On nov. 9, 11:19, ptres...@googlemail.com wrote:
>>>
>>> I have three matrices (in Matlab notation):
>>>
>>> a = rand(4,4); b = rand(3,3); p = rand(3,4);
>>>
>>> I'm trying to find an analytical proof of the following statement:
>>>
>>> det(eye(size(b))+b*p*inv(a)*p') = det(eye(size(a))+inv(a)*p'*b*p)
>>
>> http://en.wikipedia.org/wiki/Sylvester%27s_determinant_theorem
>
> I.e. det(I+AB) = det(I+BA). I don't ever recall seeing this called
> "Sylvester's determinant theorem". Does anyone know a reference?
>
> The proof is easy: in the polynomial ring Z[Aij,Bij] simply
>
> take the det of (I+AB)A = A(I+BA), and then cancel det(A)
Nice observation. How do you find these neat proofs? How did you come
to consider (I+AB)A = A(I+BA)? I mean, it is clear in hindsight, but
was there some thought process you used to find these "short-cut"
proofs?
--
-kira
What is det(A) when A is non-square?
>See my prior posts [1] for much more on such "universal" proofs.
>
>Another proof arises from the following Schur decomposition,
>but that is far too much effort compared to the above proof.
>
>  [ I A ]   [ I 0 ] [ I  0   ] [ I A ]
>  [     ] = [     ] [        ] [     ]
>  [ B I ]   [ B I ] [ 0 I-BA ] [ 0 I ]
>
>            [ I A ] [ I-AB 0 ] [ I 0 ]
>          = [     ] [        ] [     ]
>            [ 0 I ] [ 0    I ] [ B I ]
It is not too much effort if it works, which it does, whether A is
square or not. It is quite similar to the proof in the paper on
JSTOR cited on Wikipedia: <http://tinyurl.com/37zxgd>.
<http://en.wikipedia.org/wiki/Sylvester's_determinant_theorem>
Rob Johnson <r...@trash.whim.org>
take out the trash before replying
to view any ASCII art, display article in a monospaced font
That makes det(A) = 0. How do you cancel det(A)?
>>> See my prior posts [1] for much more on such "universal" proofs.
>>>
>>> Another proof arises from the following Schur decomposition,
>>> but that is far too much effort compared to the above proof.
>>>
>>>  [ I A ]   [ I 0 ] [ I  0   ] [ I A ]
>>>  [     ] = [     ] [        ] [     ]
>>>  [ B I ]   [ B I ] [ 0 I-BA ] [ 0 I ]
>>>
>>>            [ I A ] [ I-AB 0 ] [ I 0 ]
>>>          = [     ] [        ] [     ]
>>>            [ 0 I ] [ 0    I ] [ B I ]
>>
>> It is not too much effort if it works, which it does, whether A is
>> square or not. It is quite similar to the proof in the paper on
>> JSTOR cited on Wikipedia: <http://tinyurl.com/37zxgd>.
>> <http://en.wikipedia.org/wiki/Sylvester's_determinant_theorem>
>
>Why do you think it is quite similar?
Setting A_{1,1} and A_{2,2} in the proof in the paper on JSTOR to the
respective identity matrices, the proofs of Sylvester's Identity are
almost the same. The proof in the paper on JSTOR only factors into
two block triangular matrices, but the extra factorization does not
simplify the proof; it just neatens the appearance.
Obviously I meant "0s, 1s" (the above was an incomplete post
that was sent accidentally - I canceled it almost immediately).
I haven't had a chance to ponder further whether the simple proof
can be extended to the non-square case while retaining simplicity.
The point of my original message was to emphasize how simple the
proof for the square case is when performed universally. If you
follow the links in my prior post you'll see that I often promote
such universal proofs here. They illustrate beautifully the power
of the abstract notion of a polynomial.
>> Why do you think it is quite similar?
>
> Setting A_{1,1} and A_{2,2} in the proof in the paper on JSTOR to the
> respective identity matrices, the proofs of Sylvester's Identity are
> almost the same. The proof in the paper on JSTOR only factors into
> two block triangular matrices, but the extra factorization does not
> simplify the proof, it just neatens the appearance.
Could you please be more specific? Precisely what is the proof that
you have in mind?
--Bill Dubuque
> In article <y8z1wax...@nestle.csail.mit.edu>,
> Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
> >r...@trash.whim.org (Rob Johnson) wrote:
> >>Bill Dubuque <w...@nestle.csail.mit.edu> wrote:
> >>>istvan...@gmail.com wrote:
> >>>>On nov. 9, 11:19, ptres...@googlemail.com wrote:
> >>>>>
> >>>>> I have three matrices (in Matlab notation):
> >>>>>
> >>>>> a = rand(4,4); b = rand(3,3); p = rand(3,4);
> >>>>>
> >>>>> I'm trying to find an analytical proof of the following statement:
> >>>>>
> >>>>> det(eye(size(b))+b*p*inv(a)*p') = det(eye(size(a))+inv(a)*p'*b*p)
> >>>>
> >>>> http://en.wikipedia.org/wiki/Sylvester%27s_determinant_theorem
> >>>
> >>> I.e. det(I+AB) = det(I+BA). I don't ever recall seeing this called
> >>> "Sylvester's determinant theorem". Does anyone know a reference?
> >>>
> >>> The proof is easy: in the polynomial ring Z[Aij,Bij] simply
> >>>
> >>> take the det of (I+AB)A = A(I+BA), and then cancel det(A)
> >>
> >> What is det(A) when A is non-square?
> >
> >Append 0's to A,B till square. It doesn't change det of I+AB, I+BA.
>
> That makes det(A) = 0. How do you cancel det(A)?
Perhaps he meant:
(1) prove it for square matrices, as above.
(2) If A and B are non-square, say A is m x n and B is n x m with
m > n, then append m-n columns of 0's to A and m-n rows of 0's to B,
and note that
                                [ B ]
    det(I + AB) = det(I + [A 0] [   ])
                                [ 0 ]

                      [ I 0 ]   [ B ]
    det(I + BA) = det([     ] + [   ] [A 0])
                      [ 0 I ]   [ 0 ]
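This zero-padding is easy to check numerically; a minimal sketch in Python (hand-rolled det/matmul helpers, with arbitrary dimensions and seed):

```python
import random

def matmul(A, B):
    # naive product of nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def I_plus(P):
    # I + P for a square matrix P
    return [[P[i][j] + (i == j) for j in range(len(P))] for i in range(len(P))]

random.seed(2)
m, n = 4, 2  # A is m x n, B is n x m, with m > n
A = [[random.random() for _ in range(n)] for _ in range(m)]
B = [[random.random() for _ in range(m)] for _ in range(n)]

Ap = [row + [0.0] * (m - n) for row in A]   # [A 0], now m x m
Bp = B + [[0.0] * m for _ in range(m - n)]  # [B ; 0], now m x m

d_ab = det(I_plus(matmul(A, B)))     # det(I_m + AB)
d_ba = det(I_plus(matmul(B, A)))     # det(I_n + BA)
d_pad = det(I_plus(matmul(Bp, Ap)))  # det(I_m + [B;0][A 0]) = det(I_n + BA)
print(abs(d_pad - d_ba) < 1e-9, abs(d_ab - d_ba) < 1e-9)
```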
--
Robert Israel isr...@math.MyUniversitysInitials.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia Vancouver, BC, Canada
Robert Israel has given a very good explanation of what I assume you
meant by appending 0s. This indeed extends the very simple square
matrix proof to non-square matrices.
>>> Why do you think it is quite similar?
>>
>> Setting A_{1,1} and A_{2,2} in the proof in the paper on JSTOR to the
>> respective identity matrices, the proofs of Sylvester's Identity are
>> almost the same. The proof in the paper on JSTOR only factors into
>> two block triangular matrices, but the extra factorization does not
>> simplify the proof, it just neatens the appearance.
>
>Could you please be more specific? Precisely what is the proof that
>you have in mind?
Substituting I, A, and B, for the appropriate A_{i,j}, the paper
factors
  [ I A ]   [ I 0 ] [ I  A   ]
  [     ] = [     ] [        ]     [1]
  [ B I ]   [ B I ] [ 0 I-BA ]

Swap corners to get

  [ I B ]   [ I B ] [ I-BA 0 ]
  [     ] = [     ] [        ]
  [ A I ]   [ 0 I ] [ A    I ]

Exchange A and B to get the factorization

  [ I A ]   [ I A ] [ I-AB 0 ]
  [     ] = [     ] [        ]     [2]
  [ B I ]   [ 0 I ] [ B    I ]
Combining [1] and [2], we get that det(I-AB) = det(I-BA).
Expanding on this a bit, suppose A is mxn and B is nxm where n >= m.
Using this result, it is pretty simple to show that
                    n-m
    det(Ix - BA) = x    det(Ix - AB)
That is, the characteristic polynomial of BA is x^{n-m} times that of
AB. I assume this is well known; does it have a name? I have tried
looking a bit, but I have not found anything yet.
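This, too, can be spot-checked numerically at a few sample values of x; a minimal Python sketch (hand-rolled helpers, arbitrary dimensions and seed):

```python
import random

def matmul(A, B):
    # naive product of nested-list matrices
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def xI_minus(x, P):
    # x*I - P for a square matrix P
    return [[x * (i == j) - P[i][j] for j in range(len(P))] for i in range(len(P))]

random.seed(3)
m, n = 2, 4  # A is m x n, B is n x m, with n >= m
A = [[random.random() for _ in range(n)] for _ in range(m)]
B = [[random.random() for _ in range(m)] for _ in range(n)]

# check det(xI - BA) = x^(n-m) * det(xI - AB) at a few sample points
ok = all(abs(det(xI_minus(x, matmul(B, A)))
             - x ** (n - m) * det(xI_minus(x, matmul(A, B)))) < 1e-8
         for x in (0.5, 1.7, -2.0))
print(ok)
```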
Indeed, that's what I meant, but I was not happy with the fact
that the proof separates into cases for square vs. non-square.
So I was searching for a unified proof by appending 0s and 1s.
Soon after I realized a twist on appending 0s does indeed work.
The appended 0s should be distinct indeterminates that are all
set to 0 at the end, _after_ canceling det(A) [which is nonzero,
since A is a "generic" matrix of distinct indeterminates]. This
provides an excellent illustration of the power of polynomials
and universal reasoning. For further examples see my prior posts
http://google.com/groups?selm=y8zznru8v1d.fsf%40nestle.ai.mit.edu
--Bill Dubuque
I have found a reference for the m = n case. In "The Characteristic
Polynomial of a Product", Ralph Howard calls this "the most notorious
of all qualifying exam questions":
<http://www.math.sc.edu/~howard/Classes/700/charAB.pdf>
Wikipedia mentions the full result, but only sketches a proof of the
m = n case:
<http://en.wikipedia.org/wiki/Characteristic_polynomial>