# Matrices: If AB = I, then BA = I.


### JRic395488

Mar 14, 1999
I don't claim that the proof offered is original, though I don't recall seeing
it before. It's more conceptual (I think) than the crunchy
systems-of-equations proofs found in some linear algebra books. It is not hard
to see that the reasons invoked do not rely on what is to be shown.

Suppose that AB = I, with A and B n x n matrices. Then A times the kth column
of B yields the kth standard basis vector, and thus the columns of A span R^n.
Since the smallest size of a spanning set of R^n is n, the columns of A form a
basis. Now ABA = IA = A. Since the representation of a vector in terms of a
basis is unique, it follows that the kth column of BA must be the kth standard
basis vector. That is, BA = I. QED.
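For readers who want to see the theorem in action, here is a small sanity check (an illustration, not a proof) in Python with exact rational arithmetic. The 3x3 matrix A is a hypothetical example of mine, and B was computed by hand as a right inverse of it:

```python
from fractions import Fraction as F

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def identity(n):
    return [[F(int(i == j)) for j in range(n)] for i in range(n)]

A = [[F(2), F(1), F(0)],
     [F(0), F(1), F(1)],
     [F(1), F(0), F(1)]]
B = [[F(1, 3), F(-1, 3), F(1, 3)],
     [F(1, 3), F(2, 3), F(-2, 3)],
     [F(-1, 3), F(1, 3), F(2, 3)]]

assert matmul(A, B) == identity(3)  # B is a right inverse of A ...
assert matmul(B, A) == identity(3)  # ... and, as the proof shows, a left one too
```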

John Rickert

### Hope Hubris

Mar 14, 1999

AB = I
(AB)A = A
A(BA) = A

There is only one matrix X such that AX = A, and that is the identity matrix
I. Therefore

BA = I

QED

Hope Hubris
Tyrant of Jupiter

### Chan-Ho Suh

Mar 14, 1999
In response to John Rickert who wrote:

"Suppose that AB = I, with A and B n x n matrices. Then A times the
kth column
of B yields the kth standard basis vector, and thus the columns of A
span R^n.
Since the smallest size of a spanning set of R^n is n, the columns of
A form a
basis. Now ABA = IA = A. Since the representation of a vector in
terms of a
basis is unique, it follows that the kth column of BA must be the kth
standard
basis vector. That is, BA = I. QED."

First let me say that this is a pseudo theorem. AB=I does not imply
BA=I. I think the confusion is that a typical way to find inverses
requires solving [A|I] and getting [I|B] and then B is the inverse of
A. In this last case, the B found is not JUST a matrix such that
AB=I; it was found using row reduction! This is crucial since row
reduction is reversible, and hence you can reverse the reduction to
show that A is the right inverse of B and therefore B is also the left
inverse of A and hence THE inverse. Hmm...I noticed that was a little
confusing, but read it carefully and it should make sense.

So your proof is flawed; for fun, try and spot the fatal flaw.

Good luck!

Chan-Ho

P.S. Perhaps you first should try constructing small matrices A and B
s.t. AB=I but not the other way.

### John Creighton

Mar 14, 1999

Chan-Ho Suh wrote:

> In response to John Rickert who wrote:
>
> "Suppose that AB = I, with A and B n x n matrices. Then A times the
> kth column
> of B yields the kth standard basis vector, and thus the columns of A
> span R^n.
> Since the smallest size of a spanning set of R^n is n, the columns of
> A form a
> basis. Now ABA = IA = A. Since the representation of a vector in
> terms of a
> basis is unique, it follows that the kth column of BA must be the kth
> standard
> basis vector. That is, BA = I. QED."
>
> First let me say that this is a pseudo theorem. AB=I does not imply
> BA=I.

Wrong. I haven't looked at the proof but the theorem is definitely
correct. BTW there is no such thing as the right inverse or left inverse.
The inverse is defined so that if BA=I and AB=I then A is the inverse of B.
In another theorem it is shown that you only need one of these conditions
to show that A is the inverse of B.

### JRic395488

Mar 14, 1999
>
>AB = I
>(AB)A = A
>A(BA) = A
>
>There is only one matrix X such that AX = A, and that is the identity matrix

Is this last statement really that obvious? I'd be curious to see that it
is. Thanks.

John Rickert

### ull...@math.okstate.edu

Mar 14, 1999
In article <36EBB9F4...@ns.sympatico.ca>,

If we were talking about linear operators on possibly
infinite-dimensional vector spaces then we could certainly have
AB = I but not BA = I, making B a right inverse of A but not
an inverse. Of course you're right that the stated theorem is
an actual theorem; I'm just conjecturing this may be what
misled the previous poster. (At a certain age I might have known
"abstract" linear algebra but nothing about the differences
that arise in the finite-dimensional case.)

For example, consider some vector space of one-sided
sequences x = (x1, x2, ...), let A be the left shift and
B the right shift:

Ax = (x2, x3, x4, ...)
Bx = (0, x1, x2, ...)
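The two shift operators are easy to play with in code. A minimal Python sketch, modeling a one-sided sequence by the tuple of its stored leading terms (a finite-truncation convenience of mine, harmless for these checks):

```python
def shift_down(x):
    """Ax = (x2, x3, x4, ...): drop the first term."""
    return x[1:]

def shift_up(x):
    """Bx = (0, x1, x2, ...): prepend a zero."""
    return (0,) + x

x = (1, 2, 3, 4)
assert shift_down(shift_up(x)) == x             # AB = I on every sequence
assert shift_up(shift_down(x)) == (0, 2, 3, 4)  # BA loses x1, so BA != I
```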

David C. Ullrich

-----------== Posted via Deja News, The Discussion Network ==----------

### ull...@math.okstate.edu

Mar 14, 1999
In article <4aemsl...@forum.swarthmore.edu>,
cs...@cornell.edu (Chan-Ho Suh) wrote:
[...]

> So your proof is flawed; for fun, try and spot the fatal flaw.
>
> Good luck!
>
> Chan-Ho
>
> p.s.perhaps you first should try constructing small matrices A and B
> s.t. AB=I but not the other way.

Good luck indeed. Perhaps _you_ should try constructing
that example? Let us know how it comes out.

### Oscar Lanzi III

Mar 14, 1999
Only for finite matrices.

When you get to infinite matrices the proposition fails. Define

a_ij = 1 when j = i+1, 0 otherwise
b_ij = a_ji

Then AB = I but BA does not equal I. (BA)_(1,1) is 0 rather than 1.
The reason has to do with the fact that with infinite matrices, AB and
BA need not both have the zero eigenvalue.
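Because each row and column of these infinite matrices has at most one nonzero entry, every entry of AB and BA is a finite sum and can be computed exactly; the cutoff `terms` below is a convenience bound of mine, not an approximation, for the indices checked. A small Python sketch:

```python
def a(i, j):
    """Lanzi's A, 1-based indices: ones on the superdiagonal."""
    return 1 if j == i + 1 else 0

def b(i, j):
    """b_ij = a_ji: the transpose."""
    return a(j, i)

def entry(p, q, i, k, terms=50):
    """(PQ)_(i,k) = sum over j of p(i,j)*q(j,k).  Each row here has at
    most one nonzero entry, so truncating the sum at `terms` is exact
    for the indices we test."""
    return sum(p(i, j) * q(j, k) for j in range(1, terms))

# AB = I on every index we check ...
assert all(entry(a, b, i, k) == (1 if i == k else 0)
           for i in range(1, 10) for k in range(1, 10))
# ... but (BA)_(1,1) = 0, so BA != I.
assert entry(b, a, 1, 1) == 0
```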

--OL

### Christian Ohn

Mar 15, 1999
JRic395488 <jric3...@aol.com> wrote:
: I don't claim that the proof offered is original, though I don't
: recall seeing it before. It's more conceptual (I think) than the
: crunchy systems-of-equations proofs found in some linear algebra
: books. It is not hard to see that the reasons invoked do not rely
: on what is to be shown.
:
: Suppose that AB = I, with A and B n x n matrices. Then A times the
: kth column of B yields the kth standard basis vector, and thus the
: columns of A span R^n. Since the smallest size of a spanning set of
: R^n is n, the columns of A form a basis. Now ABA = IA = A. Since
: the representation of a vector in terms of a basis is unique, it
: follows that the kth column of BA must be the kth standard basis
: vector. That is, BA = I. QED.

The reason your proof is more conceptual than equation crunching is that,
basically, it's a proof in terms of linear operators that you have
expressed as matrices acting on column vectors. You can make it a bit
shorter, and get rid of coordinates altogether, as follows.

If A,B are linear operators such that AB=I, then B is injective (because
for any two vectors x,y, Bx=By implies ABx=ABy, so x=y), so it is also
surjective (here, we use the fact that we are in finite dimension).
Therefore, B is invertible. Since we already know that A is a left inverse
of B, it must also be a right inverse.

--
Christian Ohn
email: christ...@univ-reims.cat.fr.dog (remove the two animals)
Web: www.univ-reims.dog.fr.cat/Labos/SciencesExa/Mathematiques/ohn/ (ditto)

### Horst Kraemer

Mar 15, 1999
On 14 Mar 1999 06:50:41 -0500, cs...@cornell.edu (Chan-Ho Suh) wrote:

[ AB=I -> BA=I for n-n-matrices ]

> So your proof is flawed; for fun, try and spot the fatal flaw.

> Good luck!

Are you sure that you can afford this kind of superiority ?

> Chan-Ho

> p.s.perhaps you first should try constructing small matrices A and B
> s.t. AB=I but not the other way.

I would recommend that _you_ try....

Regards
Horst

### bo...@rsa.com

Mar 15, 1999
In article <19990313234634...@ng112.aol.com>,

jric3...@aol.com (JRic395488) wrote:
> I don't claim that the proof offered is original, though I don't recall seeing
> it before. It's more conceptual (I think) than the crunchy
> systems-of-equations proofs found in some linear algebra books.

But one doesn't need ANY of that

AB = I

multiply on the left by B

BAB = B

now multiply on the right by B^-1

BAB B^-1 = B B^-1

BA (I) = I

BA = I

### bo...@rsa.com

Mar 15, 1999
In article <36eb8...@bn.ar.com.au>,

"Hope Hubris" <hopeh...@geocities.com> wrote:
>
> AB = I
> (AB)A = A
> A(BA) = A
>
> There is only one matrix X such that AX = A, and that is the identity matrix
> I. Therefore
>
> BA = I

Almost but not quite. You ASSERT that there is only one X such that AX = A.
This statement needs proof.

### David Kastrup

Mar 15, 1999
bo...@rsa.com writes:

> In article <19990313234634...@ng112.aol.com>,
> jric3...@aol.com (JRic395488) wrote:
> > I don't claim that the proof offered is original, though I don't recall seeing
> > it before. It's more conceptual (I think) than the crunchy
> > systems-of-equations proofs found in some linear algebra books.
>
> But one doesn't need ANY of that
>
> AB = I
>
> multiply on the left by B
>
> BAB = B
>
> now multiply on the right by B^-1
>
> BAB B^-1 = B B^-1
>
> BA (I) = I
>
> BA = I

I don't have the original post on spool, but how do you know B^{-1}
exists?

Take A = (0 1), B = (0 1)^T, then AB is the I-Matrix of size 1, and BA
is a 2x2 matrix that is not an identity matrix.
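Kastrup's rectangular example can be verified directly; a short Python sketch:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[0, 1]]        # 1 x 2
B = [[0],
     [1]]           # 2 x 1

assert matmul(A, B) == [[1]]              # AB is the 1x1 identity
assert matmul(B, A) == [[0, 0], [0, 1]]   # BA is 2x2 and plainly not I
```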

--
David Kastrup Phone: +49-234-700-5570
Email: d...@neuroinformatik.ruhr-uni-bochum.de Fax: +49-234-709-4209
Institut für Neuroinformatik, Universitätsstr. 150, 44780 Bochum, Germany

### Richard Carr

Mar 15, 1999
On Mon, 15 Mar 1999 bo...@rsa.com wrote:

:Date: Mon, 15 Mar 1999 14:03:06 GMT
:From: bo...@rsa.com
:Newsgroups: sci.math
:Subject: Re: Matrices: If AB = I, then BA = I.
:
:In article <19990313234634...@ng112.aol.com>,

: jric3...@aol.com (JRic395488) wrote:
:> I don't claim that the proof offered is original, though I don't recall seeing
:> it before. It's more conceptual (I think) than the crunchy
:> systems-of-equations proofs found in some linear algebra books.
:
:But one doesn't need ANY of that
:
:AB = I
:
:multiply on the left by B
:
:BAB = B
:
:now multiply on the right by B^-1
:
:BAB B^-1 = B B^-1
:
:BA (I) = I
:
:BA = I


I didn't see the original post as the server here deleted a ton of
messages here, so I may be way off, but perhaps one is not allowed to
assume that just because B has a left inverse then it has a right inverse.
In that case one can't just multiply on the right by B^-1.

### Ake Brannstrom

Mar 15, 1999
On Mon, 15 Mar 1999 14:08:46 GMT, bo...@rsa.com wrote:
> "Hope Hubris" <hopeh...@geocities.com> wrote:
> >
> > AB = I
> > (AB)A = A
> > A(BA) = A
> >
> > There is only one matrix X such that AX = A, and that is the identity matrix
> > I. Therefore
> >
> > BA = I
>
> Almost but not quite. You ASSERT that there is only one X such that AX = A.
> This statement needs proof.

The statement is obviously false as stated, unless A is known to be
invertible. An easier way to convince oneself that AB=I implies BA=I
(for n x n matrices) is to observe that det(A)*det(B)=1, thus det(A)
isn't 0, so A is invertible and the result follows. I say "convince"
because you may be proving things in circles, depending on your
linear algebra textbook.
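A concrete illustration of the determinant argument, with exact rationals; the matrices below are ad hoc examples of mine, not from the thread:

```python
from fractions import Fraction as F

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(2), F(1)],
     [F(4), F(3)]]
B = [[F(3, 2), F(-1, 2)],
     [F(-2), F(1)]]
I = [[F(1), F(0)], [F(0), F(1)]]

assert matmul2(A, B) == I          # AB = I
assert det2(A) * det2(B) == 1      # det(A)*det(B) = det(AB) = 1
assert det2(A) != 0                # so A is invertible ...
assert matmul2(B, A) == I          # ... and BA = I follows
```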

Sincerely,
Ake.

### ull...@math.okstate.edu

Mar 15, 1999
In article <7cj3uj\$pah\$1...@nnrp1.dejanews.com>,

bo...@rsa.com wrote:
> In article <19990313234634...@ng112.aol.com>,
> jric3...@aol.com (JRic395488) wrote:
> > I don't claim that the proof offered is original, though I don't
> > recall seeing it before. It's more conceptual (I think) than the
> > crunchy systems-of-equations proofs found in some linear algebra
> > books.
>
> But one doesn't need ANY of that
>
> AB = I
>
> multiply on the left by B
>
> BAB = B
>
> now multiply on the right by B^-1
>
> BAB B^-1 = B B^-1
>
> BA (I) = I
>
> BA = I

Assuming that there is such a thing as B^(-1), meaning
a two-sided inverse, yes. How do you know there is? We were
given only that A and B are nxn matrices with AB = I. Why
does it follow that B is invertible?
That's more or less assuming what we're to prove.
A proof using nothing but fiddling at this level can't
be right, or it would prove the same thing for operators
on vector spaces in general.

### ull...@math.okstate.edu

Mar 15, 1999
In article <m27lsi7...@mailhost.neuroinformatik.ruhr-uni-bochum.de>,
David Kastrup <d...@mailhost.neuroinformatik.ruhr-uni-bochum.de> wrote:

> bo...@rsa.com writes:
>
> > In article <19990313234634...@ng112.aol.com>,
> > jric3...@aol.com (JRic395488) wrote:
> > > I don't claim that the proof offered is original, though I don't
> > > recall seeing it before. It's more conceptual (I think) than the
> > > crunchy systems-of-equations proofs found in some linear algebra
> > > books.
> >
> > But one doesn't need ANY of that
> >
> > AB = I
> >
> > multiply on the left by B
> >
> > BAB = B
> >
> > now multiply on the right by B^-1
> >
> > BAB B^-1 = B B^-1
> >
> > BA (I) = I
> >
> > BA = I
>
> I don't have the original post on spool, but how do you know B^{-1}
> exists?
>
> Take A = (0 1), B = (0 1)^T, then AB is the I-Matrix of size 1, and BA
> is a 2x2 matrix that is not an identity matrix.

In the mysterious original post we were given that A and B were
nxn matrices.

### ull...@math.okstate.edu

Mar 15, 1999
In article <7cidms\$s4i\$1...@mach.vub.ac.be>,
Christian Ohn <wi...@st-sulpice.org> wrote:

> JRic395488 <jric3...@aol.com> wrote:
> : I don't claim that the proof offered is original, though I don't
> : recall seeing it before. It's more conceptual (I think) than the
> : crunchy systems-of-equations proofs found in some linear algebra
> : books. It is not hard to see that the reasons invoked do not rely
> : on what is to be shown.
>
> : Suppose that AB = I, with A and B n x n matrices. Then A times the
> : kth column of B yields the kth standard basis vector, and thus the
> : columns of A span R^n. Since the smallest size of a spanning set of
> : R^n is n, the columns of A form a basis. Now ABA = IA = A. Since
> : the representation of a vector in terms of a basis is unique, it
> : follows that the kth column of BA must be the kth standard basis
> : vector. That is, BA = I. QED.
>
> The reason your proof is more conceptual than equation crunching is that,
> basically, it's a proof in terms of linear operators, that you have
> expressed as matrices acting on column vectors. You can make it a bit
> shorter, and get rid of coordinates altogether, as follows.
>
> If A,B are linear operators such that AB=I, then B is injective (because
> for any two vectors x,y, Bx=By implies ABx=ABy, so x=y), so it is also
> surjective (here, we use the fact that we are in finite dimension).
> Therefore, B is invertible. Since we already know that A is a left inverse
> of B, it must also be a right inverse.

The fact that B injective implies that B is surjective is exactly
the point. Now how does one prove _that_ "conceptually", assuming
that the traditional equation crunching is not allowed at _any_ point?

> Christian Ohn
> email: christ...@univ-reims.cat.fr.dog (remove the two animals)
> Web: www.univ-reims.dog.fr.cat/Labos/SciencesExa/Mathematiques/ohn/ (ditto)
>


### Jim Ferry

Mar 15, 1999
JRic395488 wrote:
>
> I don't claim that the proof offered is original, though I don't recall seeing
> it before. It's more conceptual (I think) than the crunchy
> systems-of-equations proofs found in some linear algebra books. It is not hard
> to see that the reasons invoked do not rely on what is to be shown.
>
> Suppose that AB = I, with A and B n x n matrices. Then A times the kth column
> of B yields the kth standard basis vector, and thus the columns of A span R^n.
> Since the smallest size of a spanning set of R^n is n, the columns of A form a
> basis. Now ABA = IA = A. Since the representation of a vector in terms of a
> basis is unique, it follows that the kth column of BA must be the kth standard
> basis vector. That is, BA = I. QED.

The above is the original post. Some people noted they couldn't
read it. We see that the point is to prove that right inverses
are left inverses.

(Note: showing existence is harder than showing uniqueness: if
AB = I and BC = I, then (AB)C = A(BC), so C = A.)

As for the statement AB = I => BA = I: we know that it holds
when A and B are square. To show it only holds then, suppose
A is an m x n, and B an n x m, matrix. Rank(A) <= min(m,n).
Recall the inequalities

rank(AB) <= rank(A) and rank(AB) <= rank(B).

If AB = I and BA = I, then rank(A) >= max(m,n), so rank(A) =
m = n.
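Ferry's rank inequalities can be checked numerically. The sketch below uses a hand-rolled rational-arithmetic rank function and a hypothetical 2 x 3 / 3 x 2 pair of mine with AB = I_2:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def rank(M):
    """Gaussian elimination over the rationals; returns the pivot count."""
    M = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 0, 0],
     [0, 1, 0]]        # 2 x 3
B = [[1, 0],
     [0, 1],
     [0, 0]]           # 3 x 2

assert matmul(A, B) == [[1, 0], [0, 1]]   # AB = I_2, so rank(A) >= 2
assert rank(A) == 2                       # and rank(A) <= min(2, 3) = 2
BA = matmul(B, A)
assert rank(BA) <= rank(A)                # rank(BA) <= 2 < 3,
assert BA != [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # so BA cannot be I_3
```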

| Jim Ferry | Center for Simulation |
| http://www.uiuc.edu/ph/www/jferry/ +------------------------+
| jferry@expunge_this_field.uiuc.edu | University of Illinois |

### Lee Rudolph

Mar 15, 1999
ull...@math.okstate.edu writes:

> Assuming that there is such a thing as B^(-1), meaning
>a two-sided inverse, yes. How do you know there is? We were
>given only that A and B are nxn matrices with AB = I. Why
>does it follow that B is invertible?
> That's more or less assuming what we're to prove.
>A proof using nothing but fiddling at this level can't
>be right, or it would prove the same thing for operators
>on vector spaces in general.

Note that P.M. Cohn gives an example of matrices (over,
I admit, a *really* *nasty* ring; can't happen with a
field), one 2-by-3, the other 3-by-2, such that both
their products are identity matrices of the appropriate
size.

Gotta use *some* properties of *something* in the proof.

Lee Rudolph

### Robert J. Kolker

Mar 15, 1999

JRic395488 wrote:

> I don't claim that the proof offered is original, though I don't recall seeing
> it before. It's more conceptual (I think) than the crunchy
> systems-of-equations proofs found in some linear algebra books. It is not hard
> to see that the reasons invoked do not rely on what is to be shown.
>

All you need is that non-singular matrices of order nxn form a group
under matrix multiplication.

Now in any group, if

a*b = I (the identity), then
b^ a^ = I (where a^ is the inverse of a);

multiply on the left by b and on the right by a and get
I = ba. That's all, folks.

Bob Kolker

### JRic395488

Mar 16, 1999
>But one doesn't need ANY of that
>
>AB = I
>
>multiply on the left by B
>
>BAB = B
>
>now multiply on the right by B^-1
>

OK. How does one know that B inverse exists? Or more exactly, that there
is a matrix C such that BC = I?

Thanks,
John Rickert

### Doug Norris

Mar 16, 1999
jric3...@aol.com (JRic395488) writes:

There doesn't have to be. The above proof only works in that case, though.

Doug

----------------------------------------------------------------------------
----------------------------------------------------------------------------
"Maybe in order to understand mankind, we have to look at the word itself.
Mankind. Basically, it's made up of two separate words---"mank" and "ind".
What do these words mean? It's a mystery, and that's why so is mankind."
- Deep Thought, Jack Handey

### Paul Hammond

Mar 16, 1999
On Mon, 15 Mar 1999 bo...@rsa.com wrote:

> In article <36eb8...@bn.ar.com.au>,

> "Hope Hubris" <hopeh...@geocities.com> wrote:
> >
> > AB = I
> > (AB)A = A
> > A(BA) = A
> >
> > There is only one matrix X such that AX = A, and that is the identity matrix
> > I. Therefore
> >
> > BA = I
>
> Almost but not quite. You ASSERT that there is only one X such that AX = A.
> This statement needs proof.
>

And indeed it is not true. Suppose A is idempotent, so that A^2=A.
Then, A and I are two matrices X s.t. AX=A.
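Hammond's counterexample is easy to check. A short Python sketch using the projection A = [[1,0],[0,0]]; the extra matrix X is my own illustration of how badly uniqueness fails:

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 0],
     [0, 0]]          # idempotent: A*A = A
I2 = [[1, 0], [0, 1]]

assert matmul(A, A) == A      # X = A solves AX = A
assert matmul(A, I2) == A     # X = I solves AX = A as well
X = [[1, 0], [7, -3]]         # in fact any X = [[1,0],[c,d]] works
assert matmul(A, X) == A
```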

----
p...@maths.nott.ac.uk
Paul Hammond | "Only teenagers could be
Maths Dept | that incoherent"
University of Nottingham |
Nottingham NG7 2RD | - Lisa Simpson

### Mike ROBSON

Mar 16, 1999
Paul Hammond <p...@maths.nott.ac.uk> writes:

This is only true IF A and I are two distinct matrices.
However we have:
A = A I      (I is the identity)
  = A AB     (since AB = I)
  = AA B     (associativity)
  = A B      (A idempotent)
  = I        (assumption).

--
J.M. Robson
LaBRI, Universite Bordeaux 1

### bo...@rsa.com

Mar 16, 1999
> > jric3...@aol.com (JRic395488) wrote:
> > > I don't claim that the proof offered is original, though I don't
> > > recall seeing it before. It's more conceptual (I think) than the
> > > crunchy systems-of-equations proofs found in some linear algebra
> > > books.
> >
> > But one doesn't need ANY of that
> >
> > AB = I
> >
> > multiply on the left by B
> >
> > BAB = B
> >
> > now multiply on the right by B^-1
> >
> > BAB B^-1 = B B^-1
> >
> > BA (I) = I
> >
> > BA = I
>
> I don't have the original post on spool, but how do you know B^{-1}
> exists?

I take the problem to mean that the left inverse and right inverse
of a matrix are identical. I believe it is implicit in the problem that I is
the same size as A and B. If I am wrong, please let me know.

If AB = I, where I is n x n, A is n x m, and B is m x n, then
AB and BA are different sized matrices. Thus, the identity matrix
in the two cases AB and BA would be different sizes.

I take the problem to mean that I is the same in both cases. This forces
A and B to be square.

>
>
> Take A = (0 1), B = (0 1)^T, then AB is the I-Matrix of size 1, and BA
> is a 2x2 matrix that is not an identity matrix.

Yes.

### ull...@math.okstate.edu

Mar 16, 1999
In article <7cjrhn\$dss\$1...@panix3.panix.com>,

Huh, I didn't know that. (For anyone who doesn't see
why this doesn't contradict what everyone else has been saying,
we've been taking "matrix" to mean "real or maybe complex matrix",
possibly "matrix over a field".)

Surely it doesn't have to be that nasty a ring? Surely
some rings even I know would do this? (Now I have today's
excuse for not getting any work done, um, I mean today's exercise.)

> Gotta use *some* properties of *something* in the proof.

I do and you do. But that's just us - some people don't
even need definitions of things to prove things about them.

> Lee Rudolph

### Christian Ohn

Mar 16, 1999
ull...@math.okstate.edu wrote:
: The fact that B injective implies that B is surjective is exactly

: the point. Now how does one prove _that_ "conceptually", assuming
: that the traditional equation crunching is not allowed at _any_ point?

Injectivity of B implies that if e_1,...,e_n is linearly independent, then
so is Be_1,...,Be_n. (The proof of that is obvious and only uses
injectivity and the definition of linear independence.) So B sends the
whole space to a subspace of dimension n, i.e., again the whole space.

(Of course, you can't get rid of bases completely: the result being false
in infinite dimension, you have to use finite dimension somewhere.)

--

### John Harper

Mar 17, 1999
In article <7cn3ks\$m...@boole.maths.tcd.ie>,
Timothy Murphy <t...@maths.tcd.ie> wrote:
>I haven't followed this thread carefully,

Neither have I; I hope someone insisted that A and B be square.
If A is p*q and B is q*p, p < q, and AB = I_p, BA cannot be I_q.
(Consider rank.)

John Harper, School of Mathematical and Computing Sciences,
Victoria University, Wellington, New Zealand
e-mail john....@vuw.ac.nz phone (+64)(4)471 5341 fax (+64)(4)495 5045

### D. J. Bernstein

Mar 17, 1999
> The fact that B injective implies that B is surjective is exactly
> the point. Now how does one prove _that_ "conceptually", assuming
> that the traditional equation crunching is not allowed at _any_ point?

In general, if h is an injective endomorphism of a finite-length module
M, then len(M/Im h) = len M - len Im h = len Ker h = 0, so M = Im h.

Here len M means the supremum of all lengths of finite chains of
submodules of M. It's easy to prove the exactness of len:

* len M >= len Ker h + len Im h: If D is a length-m chain of
submodules of Ker h, and E is a length-n chain of submodules of
Im h, then D union {h^{-1}(S): S in E} is a chain of submodules of
M of length at least m+n.

* len M <= len Ker h + len Im h: If S_0 < S_1 < ... < S_n in M then
define D = {S_j intersect Ker h} and E = {h(S_j)}. Then D is a
chain of submodules of Ker h, and E is a chain of submodules of
Im h, and the sum of the lengths of D and E is at least n.

This generalizes the usual statements about bases.

You might instead have asked why a surjective endomorphism h of a
finitely generated module M over a commutative ring R has to be
injective. M is an R[h]-module with M = hM, so by Nakayama's lemma
(1-gh)M = 0 for some g in R[h], so if h(m) = 0 then m = g(h(m)) = 0.
This generalizes the usual statements about minimal polynomials.

---Dan

### David Kastrup

Mar 18, 1999
jric3...@aol.com (JRic395488) writes:

> >
> >AB = I
> >(AB)A = A
> >A(BA) = A
> >
> >There is only one matrix X such that AX = A, and that is the identity matrix
>

> Is this last statement really that obvious? I'd be curious to
> see that it is. Thanks.

Take for example A = (0,0; 0,0). It is obvious that the only X for
which AX=A is X = (42, 13; -pi, e) and that is the identity matrix.

Please excuse the shorthand for 2x2 matrices.

### JRic395488

Mar 18, 1999
Hello --

Thanks to many for the interesting responses. I guess I should have mentioned
that one reason I like the proof I originally offered is that it is quite
accessible to linear algebra students at a sophomore level in college and uses
facts they should be very well aware of. Obviously, there are many different
(or what appear to be different) approaches to a basic result such as this one.
Readers will have to decide for themselves which manner of proof they find
the most natural; but then again, a more abstract approach can be
enlightening on a different level.

This is just a long-winded way to say that there are many ways to get there!

Regards,
John Rickert

### ull...@math.okstate.edu

Mar 18, 1999
In article <7cmkhn\$q2q\$1...@mach.vub.ac.be>,
Christian Ohn <wi...@st-sulpice.org> wrote:
> ull...@math.okstate.edu wrote:
> : The fact that B injective implies that B is surjective is exactly

> : the point. Now how does one prove _that_ "conceptually", assuming
> : that the traditional equation crunching is not allowed at _any_ point?
>
> Injectivity of B implies that if e_1,...,e_n is linearly independent, then
> so is Be_1,...,Be_n. (The proof of that is obvious and only uses
> injectivity and the definition of linear independence.)

Yes.

> So B sends the
> whole space to a subspace of dimension n, i.e., again the whole space.

"Dimension"? What's that? Oh, the number of vectors in a basis.
How do you know that any two bases have the same number of elements?

Once you know that any two bases for a vector space have the same
number of elements then it's easy to see that in the finite-dimensional
case the dimension of a proper subspace of a vector space is strictly smaller
than the dimension of the vector space, and the above is all fine.
But how do you show that any two bases have the same number of
elements? I could be wrong but it seems to me that this is obviously
equivalent to what we're trying to prove!

Let's see, you show how "any two bases have the same number
of elements" implies that a left inverse must be a right inverse.
Suppose that left inverses are right inverses, and say B and
B' are bases for a finite-dimensional space V. Say B has n vectors
and B' has m vectors. Say n >= m. Now define T from V to V by

T(a1*b1 + ... + an*bn) = a1*b1' + ... + am*bm' ,

and define

S(a1*b1' + ... + am*bm') = a1*b1 + ... + am*bm .

Then TS is the identity. Hence ST is the identity, and
hence n = m.

> (Of course, you can't get rid of bases completely: the result being false
> in infinite dimension, you have to use finite dimension somewhere.)

Right. But if we're giving a proof that's trying to avoid the
calculations in the traditional argument in linear algebra books then
we don't want to use facts that are _equivalent_ to what we're trying
to prove (because then the traditional proofs of the facts we're
using will involve the calculations we're trying to avoid, so we
haven't really avoided them, thus missing the point. Not that I
mind proving this by gaussian elimination, etc. But _if_ you're offering
the above as an obvious alternative to the post that started
all this then you really do need to show why any two bases have
the same number of elements without using the "equation-crunching".
Proving something by assuming we've already proved it is a
little too easy.)

### ull...@math.okstate.edu

Mar 18, 1999
In article <1999Mar1723...@koobera.math.uic.edu>,

d...@koobera.math.uic.edu (D. J. Bernstein) wrote:
> > The fact that B injective implies that B is surjective is exactly
> > the point. Now how does one prove _that_ "conceptually", assuming
> > that the traditional equation crunching is not allowed at _any_ point?
>
> In general, if h is an injective endomorphism of a finite-length module
> M, then len(M/Im h) = len M - len Im h = len Ker h = 0, so M = Im h.
>
> Here len M means the supremum of all lengths of finite chains of
> submodules of M. It's easy to prove the exactness of len:
>
> * len M >= len Ker h + len Im h: If D is a length-m chain of
> submodules of Ker h, and E is a length-n chain of submodules of
> Im h, then D union {h^{-1}(S): S in E} is a chain of submodules of
> M of length at least m+n.
>
> * len M <= len Ker h + len Im h: If S_0 < S_1 < ... < S_n in M then
> define D = {S_j intersect Ker h} and E = {h(S_j)}. Then D is a
> chain of submodules of Ker h, and E is a chain of submodules of
> Im h, and the sum of the lengths of D and E is at least n.
>
> This generalizes the usual statements about bases.
>
> You might instead have asked why a surjective endomorphism h of a
> finitely generated module M over a commutative ring R has to be
> injective.

I'll have to translate the rest of this into language I
understand before deciding whether I really believe it does
what you say it does, or whether there's really another
hidden assumption that happens to be equivalent to the
original question, as in most of the solutions posted.
But I can say immediately that this last sentence is
incorrect: I might have asked why a surjective endomorphism h of a
finitely generated module M over a commutative ring R has to be
injective? No, there's essentially no chance I might have

> M is an R[h]-module with M = hM, so by Nakayama's lemma
> (1-gh)M = 0 for some g in R[h], so if h(m) = 0 then m = g(h(m)) = 0.
> This generalizes the usual statements about minimal polynomials.
>
> ---Dan
>


### Christian Ohn

Mar 18, 1999
ull...@math.okstate.edu wrote:
: "Dimension"? What's that? Oh, the number of vectors in a basis.

: How do you know that any two bases have the same number of elements?

Assume you've got two bases; one with m elements, one with n elements.
Express each vector of the first as a linear combination of the second,
and vice versa. The coefficients of these linear combinations lead to two
matrices C (m x n) and D (n x m) such that CD=I_m and DC=I_n (I trust you
can fill in the details.) Taking traces on both sides, we get m=n.
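The trace step can be illustrated with a concrete rectangular pair (my own example, not from the thread): CD = I_2 forces tr(DC) = tr(CD) = 2, so DC, being 3 x 3, cannot be the identity.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

C = [[1, 0, 1],
     [0, 1, 0]]       # 2 x 3
D = [[1, 0],
     [0, 1],
     [0, 0]]          # 3 x 2

CD, DC = matmul(C, D), matmul(D, C)
assert CD == [[1, 0], [0, 1]]          # CD = I_2
assert trace(CD) == trace(DC) == 2     # tr(CD) = tr(DC), always
# DC is 3 x 3 with trace 2, so DC cannot be I_3 (whose trace is 3).
assert DC != [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```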

: [snip] [snip] ... [snip] Proving something by assuming we've already
: proved it is a little too easy.

Thanks for that nice morality lesson, but as you can see above, that's not
what I did.

### Zdislav V. Kovarik

Mar 18, 1999
In article <7crhpa\$2b1\$1...@nnrp1.dejanews.com>,

<ull...@math.okstate.edu> wrote:
>In article <7cmkhn\$q2q\$1...@mach.vub.ac.be>,
> Christian Ohn <wi...@st-sulpice.org> wrote:
[...]

>> So B sends the
>> whole space to a subspace of dimension n, i.e., again the whole space.
>
> "Dimension"? What's that? Oh, the number of vectors in a basis.
>How do you know that any two bases have the same number of elements?
>

How about starting with a careful definition of dimension?
A vector space X is finite dimensional iff it has a finite spanning list
of vectors.
The smallest length of a spanning list for X is called the dimension
dim(X) of X. (Non-negative integers are well-ordered.)

A spanning list of length dim(X) is easily seen to be a basis, etc.

Cheers, ZVK(Slavek).

### D. J. Bernstein

Mar 19, 1999, 3:00:00 AM3/19/99
to
> Surely it doesn't have to be that nasty a ring?

Nasty but elementary: Endomorphisms of the group of positive rationals.

Define four endomorphisms of that group G by their effect on the primes:

r takes 2 to 2, 3 to 5, 5 to 11, etc.;
s takes 2 to 3, 3 to 7, 5 to 13, etc.;
u takes 2 to 2, 3 to 1, 5 to 3, 7 to 1, 11 to 5, 13 to 1, etc.;
v takes 2 to 1, 3 to 2, 5 to 1, 7 to 3, 11 to 1, 13 to 5, etc.

Then (u,v) is an isomorphism from G to GxG, with inverse (r s). In other
words, over End G, (r s)(u,v) = (1) and (u,v)(r s) = (1 0,0 1).
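These four endomorphisms can be sanity-checked computationally. The encoding below (an element of G as a dict mapping prime index to exponent, so the group operation is addition of exponents) is my own device, not Dan's:

```python
# An element of G (positive rationals under multiplication) is encoded as
# {prime_index: exponent}, 1-based; endomorphisms act on the indices.

def r(g):  # r sends the k-th prime to the (2k-1)-th prime
    return {2*k - 1: e for k, e in g.items()}

def s(g):  # s sends the k-th prime to the (2k)-th prime
    return {2*k: e for k, e in g.items()}

def u(g):  # u sends the (2k-1)-th prime to the k-th; even-indexed primes to 1
    return {(k + 1)//2: e for k, e in g.items() if k % 2 == 1}

def v(g):  # v sends the (2k)-th prime to the k-th; odd-indexed primes to 1
    return {k//2: e for k, e in g.items() if k % 2 == 0}

def mul(g, h):  # the group operation: multiply rationals = add exponents
    out = dict(g)
    for k, e in h.items():
        out[k] = out.get(k, 0) + e
    return {k: e for k, e in out.items() if e != 0}

g = {1: 3, 2: -1, 5: 2}   # i.e. 2^3 * 3^(-1) * 11^2
# (r s)(u,v) = (1):  r(u(g)) * s(v(g)) recovers g
assert mul(r(u(g)), s(v(g))) == g
# (u,v)(r s) = (1 0, 0 1):  u.r = 1, v.s = 1, u.s = 0, v.r = 0
assert u(r(g)) == g and v(s(g)) == g
assert u(s(g)) == {} and v(r(g)) == {}
```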

Several books credit Eilenberg with the observation that rank doesn't
make sense over End G if G=GxG, but I don't know the original reference.

---Dan

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <19990314133529...@ng-fa1.aol.com>,
jric3...@aol.com (JRic395488) wrote:

> >
> >AB = I
> >(AB)A = A
> >A(BA) = A
> >
> >There is only one matrix X such that AX = A, and that is the identity matrix
>
> Is this last statement really that obvious? I'd be curious to see that it
> is. Thanks.

Actually, it is false if you don't assume that there is a matrix B such that
AB=I. Your original proof fills that hole.
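A concrete illustration of the hole (a numerical example of my own, not from the thread): when A is singular, AX = A does not pin down X at all.

```python
import numpy as np

# Singular A: there is no B with AB = I, and "AX = A implies X = I" fails.
A = np.array([[1., 0.],
              [0., 0.]])

for c in (0., 1., 7.):
    X = np.array([[1., 0.],
                  [0., c]])          # a one-parameter family of solutions
    assert np.allclose(A @ X, A)     # every such X satisfies AX = A
```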

--
Department of Mathematics
Luleå University of Technology
Sweden

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <7cj3uj\$pah\$1...@nnrp1.dejanews.com>, bo...@rsa.com wrote:

> In article <19990313234634...@ng112.aol.com>,
> jric3...@aol.com (JRic395488) wrote:
> > I don't claim that the proof offered is original, though I don't
> > recall seeing
> > it before. It's more conceptual (I think) than the crunchy
> > systems-of-equations proofs found in some linear algebra books.
>
> But one doesn't need ANY of that
>
> AB = I
>
> multiply on the left by B
>
> BAB = B
>
> now multiply on the right by B^-1

And how do you know that B has a (right) inverse?

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <36EDBDC7...@www2.email.com>, "Robert J. Kolker"
<bobk...@www2.email.com> wrote:

> All you need is that non-singular matrices of order nxn form a group
> under matrix multiplication.

Excuse me, but isn't "non-singular matrix" by definition equivalent to
"invertible matrix"?

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <7cm1si\$dlg\$1...@nnrp1.dejanews.com>, ull...@math.okstate.edu wrote:

> In article <7cjrhn\$dss\$1...@panix3.panix.com>,
> lrud...@panix.com (Lee Rudolph) wrote:
> > ull...@math.okstate.edu writes:
> >
> > > Assuming that there is such a thing as B^(-1), meaning
> > >a two-sided inverse, yes. How do you know there is? We were
> > >given only that A and B are nxn matrices with AB = I. Why
> > >does it follow that B is invertible?
> > > That's more or less assuming what we're to prove.
> > >A proof using nothing but fiddling at this level can't
> > >be right, or it would prove the same thing for operators
> > >on vector spaces in general.
> >
> > Note that P.M. Cohn gives an example of matrices (over,
> > I admit, a *really* *nasty* ring; can't happen with a
> > field), one 2-by-3, the other 3-by-2, such that both
> > their products are identity matrices of the appropriate
> > size.
>
> Huh, I didn't know that. (For anyone who doesn't see
> why this doesn't contradict what everyone else has been saying,
> we've been taking "matrix" to mean "real or maybe complex matrix",
> possibly "matrix over a field".)
>

> Surely it doesn't have to be that nasty a ring?

What if we take the trivial ring {0}?

### ull...@math.okstate.edu

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <1999Mar1723...@koobera.math.uic.edu>,
d...@koobera.math.uic.edu (D. J. Bernstein) wrote:
> > The fact that B injective implies that B is surjective is exactly
> > the point. Now how does one prove _that_ "conceptually", assuming
> > that the traditional equation crunching is not allowed at _any_ point?
>
> In general, if h is an injective endomorphism of a finite-length module
> M, then len(M/Im h) = len M - len Im h = len Ker h = 0, so M = Im h.
>
> Here len M means the supremum of all lengths of finite chains of
> submodules of M. It's easy to prove the exactness of len:
>
> * len M >= len Ker h + len Im h: If D is a length-m chain of
> submodules of Ker h, and E is a length-n chain of submodules of
> Im h, then D union {h^{-1}(S): S in E} is a chain of submodules of
> M of length at least m+n.
>
> * len M <= len Ker h + len Im h: If S_0 < S_1 < ... < S_n in M then
> define D = {S_j intersect Ker h} and E = {h(S_j)}. Then D is a
> chain of submodules of Ker h, and E is a chain of submodules of
> Im h, and the sum of the lengths of D and E is at least n.

Good answer - of course you knew that.

Some people seem to have got the idea from my repeated
"ok, now how do you do _that_?" that I thought it couldn't
be done or didn't know where to look it up or something.
In fact I was just pointing out that various solutions
posted either assumed what was to be proved or something
easily seen to be equivalent to it, hence didn't really avoid
whatever it was we were trying to avoid. But the above is
actually a lot nicer than the version I knew where to look
up.

In case I'm not the only one who was stuck for a
second on the "the sum of the lengths of D and E is at least n",
the point is that for each j at least one of the two inclusions

S_j intersect Ker h <= S_(j+1) intersect Ker h
h(S_j) <= h(S_(j+1))

must be proper; if you assume they're both equality then
it's easy to see that S_j = S_(j+1) . Keen.

> This generalizes the usual statements about bases.

Indeed. I suspect it's also "really the same" as
the traditional proof in all those linear algebra books.

(I suppose there's no word for "maximum cardinality of
an independent set", since it turns out to be equivalent
to "cardinality of a maximal independent set". We use the
first notion as the definition of "dimension", then we
go through the above with this notion of dimension, then
the above shows that left inverses are right inverses,
hence any two maximal independent sets have the same
cardinality, so finally this definition of "dim" is the
same as the usual one.)

> You might instead have asked why a surjective endomorphism h of a
> finitely generated module M over a commutative ring R has to be

> injective. M is an R[h]-module with M = hM, so by Nakayama's lemma

### ull...@math.okstate.edu

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <19990317215528...@ng-cd1.aol.com>,

jric3...@aol.com (JRic395488) wrote:
> Hello --
>
> Thanks to many for the interesting responses. I guess I should have mentioned
> that one reason I like the proof I originally offered is that it is quite
> accessible to linear algebra students at a sophomore level in college and uses
> facts they should be very well aware of. Obviously, there are many different
> (or what appear to be different) approaches to a basic result such as this
> one.
> Readers will have to decide for themselves which manners of proof they find
> the most natural, but then again, a more abstract approach can be
> enlightening on a different level.

What I think is enlightening about what Dan posted is first
going through and translating it to an argument about bases for
vector spaces, then trying to see how it's really equivalent to
the argument you said you were trying to avoid. (Not sure about
this last, luckily it's vague enough it can't quite be false.)

> This is just a long-winded way to say that there are many ways to get there!
>
> Regards,
> John Rickert
>
>


### ull...@math.okstate.edu

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <7cs0u8\$mlt\$1...@mach.vub.ac.be>,
Christian Ohn <wi...@st-sulpice.org> wrote:
> ull...@math.okstate.edu wrote:
> : "Dimension"? What's that? Oh, the number of vectors in a basis.

> : How do you know that any two bases have the same number of elements?
>
> Assume you've got two bases; one with m elements, one with n elements.
> Express each vector of the first as a linear combination of the second,
> and vice versa. The coefficients of these linear combinations lead to two
> matrices C (m x n) and D (n x m) such that CD=I_m and DC=I_n (I trust you
> can fill in the details.) Taking traces on both sides, we get m=n.

Believe it or not I never realized that AB and BA must have the same
trace (when A is m x n and B is n x m). Yes, it's trivial to verify.

> : [snip] [snip] ... [snip] Proving something by assuming we've already
> : proved it is a little too easy.
>
> Thanks for that nice morality lesson, but as you can see above, that's not
> what I did.

Sorry. It's what _several_ people have done here, and actually
I didn't realize that showing that two bases had the same number
of elements was so easy, I thought it was equivalent to all these
other things. (Um, it is. All these things are much simpler than
I realized 10 minutes ago. Huh.)

> --
> Christian Ohn
> email: christ...@univ-reims.cat.fr.dog (remove the two animals)
> Web: www.univ-reims.dog.fr.cat/Labos/SciencesExa/Mathematiques/ohn/ (ditto)
>


### ull...@math.okstate.edu

Mar 19, 1999, 3:00:00 AM3/19/99
to
In article <7csc7t\$n...@mcmail.cis.McMaster.CA>,
kov...@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) wrote:
> In article <7crhpa\$2b1\$1...@nnrp1.dejanews.com>,
> <ull...@math.okstate.edu> wrote:
> >In article <7cmkhn\$q2q\$1...@mach.vub.ac.be>,

> > Christian Ohn <wi...@st-sulpice.org> wrote:
> [...]
> >> So B sends the
> >> whole space to a subspace of dimension n, i.e., again the whole space.
> >
> > "Dimension"? What's that? Oh, the number of vectors in a basis.
> >How do you know that any two bases have the same number of elements?
> >
>
> How about starting with a careful definition of dimension?

Well, there's nothing "careless" about the definition I've
always used: cardinality of a basis. Assuming that you _first_
show that any two bases have the same cardinality - using this
definition before doing that would be careless.
Yes, this definition works much better. (Similarly the
definition "supremum of the cardinality of an independent
set" also works better.)

> A vector space X is finite dimensional iff it has a finite spanning list
> of vectors.
> The smallest length of a spanning list for X is called the dimension
> dim(X) of X. (Non-negative integers are well-ordered.)
>
> A spanning list of length dim(X) is easily seen to be a basis, etc.

Let's see, with this notion of dimension it's clear that the
dimension of T(X) cannot exceed the dimension of X if T is linear,
and the fact that sets spanning subspaces can be extended to
spanning sets makes it clear that the dimension of a proper subspace
of X is strictly smaller than dim(X)... oh, we also need to know that
if T is injective then dim(T(X)) = dim(X), and yes this is clear.

What the heck is all that stuff in those linear algebra books for?
This is _much_ simpler than all that.
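For numerical intuition only (a sketch of my own, using matrix rank as a stand-in for this notion of dimension):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.standard_normal((5, 3))   # columns span a subspace X of R^5
T = rng.standard_normal((4, 5))   # a linear map T : R^5 -> R^4

# dim T(X) <= dim X: the columns of T @ S span T(X), so a spanning list
# for X pushes forward to a spanning list for T(X) of the same length.
assert np.linalg.matrix_rank(T @ S) <= np.linalg.matrix_rank(S)
```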

> Cheers, ZVK(Slavek).

### Charles H. Giffen

Mar 19, 1999, 3:00:00 AM3/19/99
to ull...@math.okstate.edu
ull...@math.okstate.edu wrote:
>
> In article <7cjrhn\$dss\$1...@panix3.panix.com>,
> lrud...@panix.com (Lee Rudolph) wrote:
> > ull...@math.okstate.edu writes:
> >
> > > Assuming that there is such a thing as B^(-1), meaning
> > >a two-sided inverse, yes. How do you know there is? We were
> > >given only that A and B are nxn matrices with AB = I. Why
> > >does it follow that B is invertible?
> > > That's more or less assuming what we're to prove.
> > >A proof using nothing but fiddling at this level can't
> > >be right, or it would prove the same thing for operators
> > >on vector spaces in general.
> >
> > Note that P.M. Cohn gives an example of matrices (over,
> > I admit, a *really* *nasty* ring; can't happen with a
> > field), one 2-by-3, the other 3-by-2, such that both
> > their products are identity matrices of the appropriate
> > size.
>
> Huh, I didn't know that. (For anyone who doesn't see
> why this doesn't contradict what everyone else has been saying,
> we've been taking "matrix" to mean "real or maybe complex matrix",
> possibly "matrix over a field".)
>
> Surely it doesn't have to be that nasty a ring? Surely
> some rings even I know would do this? (Now I have today's
> excuse for not getting any work done, um, I mean today's exercise.)
>
> > Gotta use *some* properties of *something* in the proof.
>
> I do and you do. But that's just us - some people don't
> even need definitions of things to prove things about them.
>
> > Lee Rudolph
> >
>

Here are a couple of "nasty" rings for your consideration
(actually they are pretty interesting to an algebraic
K-theorist):

(1) Let L be the free ring Z{s,t} on two indeterminates
s and t modulo the ideal generated by (1 - st). This
ring has the interesting property that there is an invertible
upper triangular matrix M over L whose inverse is a lower
triangular matrix, viz.

        | t   1-ts |             | s     0 |
    M = |          |    M^{-1} = |         |
        | 0    s   |             | 1-ts  t |
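This can be sanity-checked symbolically, rewriting modulo the single relation st = 1 (a sympy-based sketch, not from the original post; the `reduce_st` helper is an ad hoc rewriting loop, not a proper normal-form algorithm for the ring L):

```python
import sympy as sp

s, t = sp.symbols('s t', commutative=False)

def reduce_st(expr):
    # Repeatedly apply the relation s*t = 1 until the expression stabilizes.
    e = sp.expand(expr)
    prev = None
    while e != prev:
        prev = e
        e = sp.expand(e.subs(s*t, 1))
    return e

M    = sp.Matrix([[t, 1 - t*s], [0, s]])
Minv = sp.Matrix([[s, 0], [1 - t*s, t]])

# Upper triangular times lower triangular gives the identity both ways.
assert (M * Minv).applyfunc(reduce_st) == sp.eye(2)
assert (Minv * M).applyfunc(reduce_st) == sp.eye(2)
```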

(2) Let B be the free ring Z{t,t^{-1},x,y} on an invertible
generator t and two more free generators x and y modulo
the ideal generated by

1 - xy, ytx, yxt + tyx - t, xt^2 - tx, t^2 y - yt

Let

        | x        |
    X = |          |       Y = | y  ty |
        | xt^{-1}  |

Then XY = I_2 in M_2(B) and YX = I_1 in M_1(B), where
I_n denotes the identity in the ring M_n(B) of n x n
matrices over B. Thus a 2 x 1 matrix can have a 1 x 2
matrix two-sided inverse.

The ring B also has the magical property of shifting algebraic
K-theory by one dimension, ie.

K_i(R) \iso K_{i+1}( B(R) ),

where B(R) = B \tensor R.

--Chuck Giffen

### ull...@math.okstate.edu

Mar 20, 1999, 3:00:00 AM3/20/99
to
In article <7cu5ni\$e2r\$1...@nnrp1.dejanews.com>,

PS. I lied. It's not clear to me why it's obvious that the
dimension of a proper subspace of X must be strictly smaller than
dim(X). Of course I suspect I could prove that, I'm curious whether
it's "immediately" obvious for some reason I don't see.
The point being that in each of the valid solutions we've
seen here there's a lot of stuff that I believe I coulda figured
out, and one step that seems like the clever bit that makes it
work. Not that you said that giving the right definition of
dimension leads immediately to a proof, but I'm curious whether
it does - if it _is_ obvious that this dim is strictly monotone
then this is a solution where _all_ of it strikes me as "clear";
or it could be that it's right here where the clever bit is
hiding.

### Robert J. Kolker

Mar 21, 1999, 3:00:00 AM3/21/99
to

> In article <36EDBDC7...@www2.email.com>, "Robert J. Kolker"
> <bobk...@www2.email.com> wrote:
>
> > All you need is that non-singular matrices of order nxn form a group
> > under matrix multiplication.
>
> Excuse me, but isn't "non-singular matrix" by defintion equivalent to
> "invertible matrix"?

Correct. A non-singular matrix is a matrix that has an inverse.

B.K.

### Bill Dubuque

Apr 24, 1999, 3:00:00 AM4/24/99
to
The Hilbert Hotel is a classical popular realization of the first
infinite ordinal w. It's a nice example of the dichotomy between
the finite and the infinite: the right-shift operator n -> n+1
is 1-1 but not onto the infinite set w; contrast this with a
finite set, where a function is 1-1 if and only if it is onto
(the pigeonhole principle). The exact same contrast occurs between
finite and infinite dimensional vector spaces and can again be
illustrated via shift operators, e.g. if V has infinite dimension

let R = (Right) shift operator: R(a b c d ...) = (0 a b c ...)
and L = (Left) shift operator: L(a b c d ...) = (b c d e ...)
so LR = I but RL != I since: RL(a b c d ...) = (0 b c d ...)

This has a simple model in a vector space V of polynomials. Let V
have basis {1, x, x^2 ...}, so (a b c ...) = a + b x + c x^2 + ...
with shift operators R p(x) = x p(x), L p(x) = 1/x (p(x)-p(0))
so that LR = I but RL != I since RL p(x) = p(x)-p(0). Note

L is onto but not 1-1, since L(Rp)=p but L(1)=0.

R is 1-1 but not onto; its image RV = xV is a proper
subspace isomorphic to V (since Im R ~= V/(Ker R) = V/0 = V);
note: dim V is infinite iff V is isomorphic to a proper subspace,
just as a set is infinite iff it's isomorphic to a proper subset.
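Both models can be exercised in a few lines (a coefficient-list sketch of my own):

```python
# A polynomial is a coefficient list [a, b, c, ...] meaning a + b*x + c*x^2 + ...

def R(p):              # R p(x) = x * p(x): right shift
    return [0] + p

def L(p):              # L p(x) = (p(x) - p(0)) / x: left shift
    return p[1:]

p = [3, 1, 4, 1, 5]
assert L(R(p)) == p                   # LR = I
assert R(L(p)) == [0, 1, 4, 1, 5]     # RL p = p - p(0), so RL != I
```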

In fact this equivalence between 1-1 and onto maps comes from
a general pigeonhole principle in lattices - in particular it
holds for any algebraic structure of finite height (in its
lattice of subalgebras) -- see my soon-to-appear post.

The shift operator is of fundamental importance in linear algebra,
e.g. see the review of Fuhrmann's book in my prior post.

The hotel is usually called "Hilbert's Hotel" - named after the great
mathematician David Hilbert, who often mentioned it in his popular
lectures; cf. Rucker: Infinity and the Mind p. 73, where it is also
mentioned that the Polish science fiction writer Stanislaw Lem once
wrote a short story about Hilbert's Hotel, which appears in Vilenkin's
book Stories About Sets. Rudy Rucker's book is one of the best popular
expositions of most all aspects of infinity - highly recommended.

-Bill Dubuque

### Andrew Martin

Apr 25, 1999, 3:00:00 AM4/25/99
to

Bill Dubuque wrote in message ...

>The Hilbert Hotel is a classical popular realization of the first
>infinite ordinal w. It's a nice example of the dichotomy between
>the finite and the infinite: the right-shift operator n -> n+1
>is 1-1 ..<<<SNIPPY>>>>...orphic to V (since Im R ~= V/(Ker R) = V/0 = V);

>note: dim V is infinite iff V is isomorphic to a proper subspace,
>just as a set is infinite iff it's isomorphic to a proper subset.
>
<<<SNIPPY>>>

Oh, I forgot to tell you: I'm just a last-year high school 16-year-old guy
doing normal maths stats and maths calc. You lost me after the first
sentence. :)

### Bill Dubuque

Sep 2, 2010, 11:14:26 AM9/2/10
to
Bill Dubuque wrote  to sci.math on 1999/04/24:

>
> The Hilbert Hotel is a classical popular realization of the first
> infinite ordinal w. It's a nice example of the dichotomy between
> the finite and the infinite: the right-shift operator n -> n+1
> is 1-1 but not onto the infinite set w; contrast this with a
> finite set, where a function is 1-1 if and only if it is onto
> (the pigeonhole principle). The exact same contrast occurs between
> finite and infinite dimensional vector spaces and can again be
> illustrated via shift operators, e.g. if V has infinite dimension
>
> let R = (Right) shift operator: R(a b c d ...) = (0 a b c ...)
> and L = (Left) shift operator: L(a b c d ...) = (b c d e ...)
> so LR = I but RL != I since: RL(a b c d ...) = (0 b c d ...)
>
> This has a simple model in a vector space V of polynomials. Let V
> have basis {1, x, x^2 ...}, so (a b c ...) = a + b x + c x^2 + ...
> with shift operators R p(x) = x p(x), L p(x) = 1/x (p(x)-p(0))
> so that LR = I but RL != I since RL p(x) = p(x)-p(0). Note
>
> L is onto but not 1-1, since L(Rp)=p but L(1)=0.
>
> R is 1-1 but not onto; its image RV = xV is a proper
> subspace isomorphic to V (since Im R ~= V/(Ker R) = V/0 = V);

> note: dim V is infinite iff V is isomorphic to a proper subspace,
> just as a set is infinite iff it's isomorphic to a proper subset.
>
> In fact this equivalence between 1-1 and onto maps comes from
> a general pigeonhole principle in lattices - in particular it
> holds for any algebraic structure of finite height (in its
> lattice of subalgebras) -- see my soon to appear post .

Sigh, I forgot to post that until much later, see here:

sci.math, 12 Dec 2007 Re: Proof that A^n = 0 if A is nilpotent

http://math.stackexchange.com/questions/3852

> The shift operator is of fundamental importance in linear algebra,
> e.g. see the review of Fuhrmann's book in my prior post .
>
> The hotel is usually called "Hilbert's Hotel" - named after the great
> mathematician David Hilbert, who often mentioned it in his popular
> lectures; cf. Rucker: Infinity and the Mind p. 73, where it is also
> mentioned that the Polish science fiction writer Stanislew Lem once
> wrote a short story about Hilbert's Hotel, which appears in Vilenkin's
> book Stories About Sets. Rudy Rucker's book is one of the best popular
> expositions of most all aspects of infinity - highly recommended  .

--Bill Dubuque