
Proving infinite dimensional vector spaces are not algebraically reflexive


Arturo Magidin

Sep 24, 2010, 10:51:14 AM
I assume the Axiom of Choice holds throughout.

Suppose F is a field and kappa is an infinite set. Then the vector
space F^(kappa) (direct sum of kappa copies of F) is infinite
dimensional, of dimension |kappa|, and its dual is F^kappa (direct
product of |kappa| copies of F).

If |F|<|kappa|, then it is easy to see that the two spaces are not
isomorphic, since the first has cardinality |kappa|, while the latter
has cardinality 2^|kappa|.

But if |F|>=|kappa|, then the cardinality argument does not work:
F^(kappa) has cardinality |F|, while |F^kappa|=|F|^|kappa|, and this
could be equal to |F| (e.g., |F|=2^aleph_0, |kappa|=aleph_0).

It is trivial to see that the set of coordinate functionals dual to the
standard basis of F^(kappa) is not a basis for the dual (F^(kappa))*;
but this does not show that the two spaces do not have the same
dimension.

I seem to remember that I used to know a proof that in the infinite
dimensional case, the vector space is never isomorphic to its dual;
this is asserted in Friedberg, Insel, and Spence. But I can't seem to
remember it.

Is there an easy way (or at least a straightforward way) to construct
a linearly independent subset of F^kappa with strictly more than |F|
vectors? Or to prove the dimension is strictly larger?

--
Arturo Magidin

quasi

Sep 24, 2010, 1:31:30 PM

As an idea for a possible proof (I haven't checked the details),
suppose the direct sum F^(kappa) is isomorphic to the direct product
F^kappa, where kappa is an infinite set. Let K be the prime field of
F. Then wouldn't an isomorphism of F^(kappa) onto F^kappa necessarily
induce an isomorphism of K^(kappa) onto K^kappa? If so, then for the
case of nonzero characteristic, K is finite, so |K^(kappa)| = |kappa| <
2^|kappa| = |K^kappa|, and we have a clear contradiction. For the
case of zero characteristic, assuming |Q| >= |kappa|, then kappa is
countable, so we only need to consider the case kappa = N. But Q^(N)
is clearly countable, whereas Q^N is clearly uncountable,
contradiction.

Is my reduction to the prime field case actually valid? I'm not sure.

quasi

quasi

Sep 24, 2010, 1:52:47 PM

Forget it -- I don't think there's any easy way to get the reduction
to the prime field.

Instead, I think the whole thing is just a straight cardinality
argument.

If kappa is infinite, the direct sum F^(kappa) has dimension |kappa|
(just produce a basis), whereas the direct product F^kappa has
dimension at least |2^kappa| (just produce |2^kappa| linearly
independent elements).

quasi

Arturo Magidin

Sep 24, 2010, 12:55:44 PM
On Sep 24, 12:31 pm, quasi <qu...@null.set> wrote:
> On Fri, 24 Sep 2010 07:51:14 -0700 (PDT), Arturo Magidin <magi...@member.ams.org> wrote:
> >I assume the Axiom of Choice holds throughout.
>
> >Suppose F is a field and kappa is an infinite set. Then the vector
> >space F^(kappa) (direct sum of kappa copies of F) is infinite
> >dimensional, of dimension |kappa|, and its dual is F^kappa (direct
> >product of |kappa| copies of F).
>
> >If |F|<|kappa|, then it is easy to see that the two spaces are not
> >isomorphic, since the first has cardinality |kappa|, while the latter
> >has cardinality 2^|kappa|.
>
> >But if |F|>=|kappa|, then the cardinality argument does not work:
> >F^(kappa) has cardinality |F|, while |F^kappa|=|F|^|kappa|, and this
> >could be equal to |F| (e.g., |F|=2^aleph_0, |kappa|=aleph_0).

Actually, |F|=|kappa| is not a problem, since |kappa|^|kappa| = 2^|kappa|
for all infinite kappa (we have 2^|kappa| <= |kappa|^|kappa| <=
(2^|kappa|)^|kappa| = 2^|kappa|). So the only problematic case is
|F|>|kappa|.

[...]

> As an idea for a possible proof (I haven't checked the details),
> suppose the direct sum F^(kappa) is isomorphic to the direct product
> F^kappa, where kappa is an infinite set. Let K be the prime field of
> F. Then wouldn't an isomorphism of F^(kappa) onto F^kappa necessarily
> induce an isomorphism of K^(kappa) onto K^kappa? If so, then for the
> case of nonzero characteristic, we have a clear contradiction. For the
> case of zero characteristic, assuming |Q| >= |kappa|, then kappa is
> countable, so we only need to consider the case kappa = N. But Q^(N)
> is clearly countable, whereas Q^N is clearly uncountable,
> contradiction.
>
> Is my reduction to the prime field case actually valid? I'm not sure.

Thanks, but I don't see how to "induce" an isomorphism; for example,
the map of R to itself given by multiplication by pi is a linear
isomorphism from R to itself, but it does not map Q to itself, so how
would this map "induce" an isomorphism from Q to itself? (Of course,
this is the finite dimensional case, not the infinite dimensional, but
I would think we run into the same problem).

Intuitively, you really want a direct argument: take the vector spaces
over K and then extend the base field, arguing that since they are not
isomorphic over K, they remain non-isomorphic after extending the base
field. But I still run into a problem:

Because K^(kappa), K^kappa, and F are free over K, tensoring with F is
exact, so K^(kappa) (x)_K F is free over F of the same dimension as
K^(kappa) is over K (namely kappa), and K^kappa(x)_K F is free over F
of the same dimension as K^kappa over K (namely, more than kappa).
K^(kappa) (x)_K F is isomorphic to F^(kappa); but is K^kappa (x)_K F
isomorphic to F^kappa? I don't know.

(This is essentially the same problem: how do we get an induced
isomorphism "down" to K?)

In any case, using tensors would be above the level of the course in
question at this point, unfortunately.

--
Arturo Magidin

Arturo Magidin

Sep 24, 2010, 12:58:31 PM

And my question was precisely: how do we produce more than |kappa|
linearly independent elements? Do you have any in mind, say, for the
space of real sequences?

--
Arturo Magidin

quasi

Sep 24, 2010, 2:30:42 PM

I think it hinges on the following claimed general property of
infinite sets:

Claim:

If S is an infinite set, then there is a chain (under inclusion) of
distinct subsets of S whose cardinality is |2^S|.

I don't immediately see how to prove the above claim. Of course, I'm
fairly sure it needs the Axiom of Choice, but we're assuming that, so
that's not an issue.

Anyway, assuming the truth of the above claim, you can get a set of
|2^kappa| linearly independent elements in the direct product F^kappa:
take a chain of distinct subsets of kappa of cardinality |2^kappa|,
and to each element X of the chain associate the vector with ones in
the positions corresponding to the elements of X and zeros in all
other positions. Those vectors are clearly linearly independent, and
there are |2^kappa| of them.

Thus, assuming my "chain claim" is true, the problem reduces to
proving that claim. The new problem may not be any easier, but at
least it's now a straight set-theoretic problem.
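
As a sanity check of the linear-independence step, here is a minimal
sketch in Python (assuming NumPy is available; the chain and the
ambient dimension are toy choices, standing in for a chain of subsets
of kappa):

    # Indicator vectors of a chain of distinct subsets of {0,...,5} are
    # linearly independent, i.e. the matrix they form has full rank.
    import numpy as np

    chain = [{0}, {0, 1}, {0, 1, 2}, {0, 1, 2, 3, 4}]   # a chain under inclusion
    n = 6
    M = np.array([[1.0 if i in s else 0.0 for i in range(n)] for s in chain])
    print(np.linalg.matrix_rank(M) == len(chain))       # True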

quasi

quasi

Sep 24, 2010, 2:37:01 PM

Never mind -- the "chain claim" is false for S = N.

In fact, I think for any infinite set S, the cardinality of a maximal
chain of subsets is always |S|, not |2^S|.

Oh well -- back to the drawing board.

quasi

quasi

Sep 24, 2010, 3:09:25 PM

Ok, I think this will work ...

Suppose we have an isomorphism f from F^kappa onto F^(kappa).

Then f induces a one-to-one map from the set of finite-dimensional
subspaces of F^kappa to the set of finite-dimensional subspaces of
F^(kappa). But F^(kappa) has |kappa| finite-dimensional subspaces,
whereas F^(kappa) has |2^(kappa)| such spaces.

In fact, you can use just one-dimensional subspaces.

Did I clinch it now?

quasi

quasi

Sep 24, 2010, 3:13:42 PM

the above should be: whereas F^kappa has |2^kappa| such subspaces.

Arturo Magidin

Sep 24, 2010, 2:33:41 PM

[...]

> >Ok, I think this will work ...
>
> >Suppose we have an isomorphism f from F^kappa onto F^(kappa).
>
> >Then f induces a one-to-one map from the set of finite-dimensional
> >subspaces of F^kappa to the set of finite-dimensional subspaces of
> >F^(kappa). But F^(kappa) has |kappa| finite-dimensional subspaces,
> >whereas F^(kappa) has |2^(kappa)| such spaces.

I don't think your computation for F^(kappa) is correct.

Again, consider the case where F is the real numbers and kappa is the
natural numbers, so that F^(kappa) is the space of almost-null
(eventually zero) sequences. For each alpha in R, the subspace
generated by (1,alpha,0,0,0,...) is a one-dimensional subspace of
F^(kappa). And <(1,alpha,0,0,...)> = <(1,beta,0,0,...)> if and only if
there exists r such that (1,alpha,0,0,...) = r(1,beta,0,0,...), if and
only if alpha = beta. So I have at least |R| distinct
finite-dimensional (indeed, one-dimensional) subspaces of R^(N), not
merely |N| of them.

I think you were thinking about finite subsets of a given basis
(which would indeed give only |kappa| subspaces), but there are of
course lots of other finite-dimensional subspaces.

(Remember that we are in the case where |F|>|kappa|).

--
Arturo Magidin

quasi

Sep 24, 2010, 3:57:20 PM

Hmmm ...

So assume |F| >= |kappa|, where kappa is infinite.

The number of one-dimensional subspaces of F^(kappa) is

|F| * |kappa|

whereas the number of one-dimensional subspaces of F^kappa is

|F^kappa| * |2^kappa|

Not much progress since the only case it clinches is |F| = |kappa|.

quasi

quasi

Sep 24, 2010, 4:14:50 PM

Last try -- this argument is _really_ simple -- if it doesn't work,
I'll give up.

Suppose there is an isomorphism f from F^kappa onto F^(kappa).

Consider the set S of elements of F^kappa with exactly one coordinate
of 1, and the rest 0. The vector subspace <S> of F^kappa is not the
whole space, but since all the elements of S are linearly independent,
the vector subspace <f(S)> of F^(kappa) _is_ the whole space, since
the set f(S) is a set of linearly independent elements of cardinality
|kappa|.

Now I think we have it.

Yes?

quasi

Arturo Magidin

Sep 24, 2010, 4:24:45 PM
On Sep 24, 3:14 pm, quasi <qu...@null.set> wrote:
> On Fri, 24 Sep 2010 14:57:20 -0500, quasi <qu...@null.set> wrote:
> >On Fri, 24 Sep 2010 11:33:41 -0700 (PDT), Arturo Magidin

No; in the infinite dimensional case, beta linearly independent and
|beta| = dim(V) does *not* imply beta is a basis for V. For example,
{x, x^2, x^3, x^4, ...} is a linearly independent subset of R[x], of
cardinality dim(R[x]) = aleph_0, but it's not a basis.
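
(As a concrete illustration of that example, here is a minimal sketch
in Python, assuming SymPy is available: no finite linear combination of
x, x^2, ..., x^n equals the constant polynomial 1, since every such
combination has zero constant term.)

    # Try to solve a_1*x + ... + a_n*x^n = 1 identically in x: compare
    # coefficients; the resulting system is inconsistent.
    from sympy import symbols, Poly, solve

    x = symbols('x')
    n = 5
    a = symbols(f'a1:{n + 1}')                      # a1, ..., a5
    combo = sum(ai * x**(i + 1) for i, ai in enumerate(a))
    eqs = Poly(combo - 1, x).all_coeffs()           # coefficient equations
    print(solve(eqs, a))                            # [] -- no solution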

I found one sketch in MathOverflow:

http://mathoverflow.net/questions/13322/slick-proof-a-vector-space-has-the-same-dimension-as-its-dual-if-and-only-if-it

(answer by Andrea Ferretti), but I don't see how to justify the
existence of the dual elements mentioned in the comments.

(The idea is similar to your original one: prove the prime field case
(done). Then show that if a field K has the property that a vector
space over K has the same dimension as its dual if and only if it is
finite dimensional, and F is a field extension of K, then F also has
the property. If kappa is infinite, then K^(kappa) has strictly
smaller dimension than K^kappa; let E be a basis for K^kappa; the
embedding K-->F gives an embedding K^kappa-->F^kappa; now think about
E as elements of F^kappa. You only need to show that E is linearly
independent over F, since the dimensions of F^(kappa) and of K^(kappa)
are equal to each other (both are |kappa|), so |E|>kappa by the
assumption. Andrea's sketch is to take f_1,...,f_m in E; then find
dual elements in K^(kappa), by which I understand elements v_1,...,v_m
in K^(kappa) such that f_j(v_i) = delta_{ij}; since v_1,...,v_m are
also in F^(kappa), evaluating a linear combination g=r_1*f_1+...
+r_m*f_m at each v_i shows that if g=0 then r_1=...=r_m=0, giving
linear independence over E.

But I don't see how to justify the existence of the dual elements in
this setting).
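
(For what it's worth, here is a toy finite-dimensional illustration, in
Python with SymPy, of what such dual elements look like: the rows of a
full-row-rank matrix A stand in for f_1,...,f_m, and the columns of a
right inverse B are the v_i with f_j(v_i) = delta_{ij}. This only
illustrates the definition; it does not address the justification in
the K^kappa setting.)

    from sympy import Matrix, eye

    A = Matrix([[1, 2, 0, 3],
                [0, 1, 1, 1],
                [2, 0, 1, 0]])       # three linearly independent "functionals"
    B = A.pinv()                     # a right inverse, since A has full row rank
    print((A * B - eye(3)).is_zero_matrix)   # True, i.e. f_j(v_i) = delta_{ij}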

--
Arturo Magidin


Bill Dubuque

Sep 24, 2010, 6:49:02 PM
Arturo Magidin <mag...@member.ams.org> wrote:
>
> I seem to remember that I used to know a proof that in the infinite
> dimensional case, the vector space is never isomorphic to its dual;
> this is asserted in Friedberg, Insel, and Spence. But I can't seem to
> remember it.

Let V be a vector space over F, with dual V', of dimensions d, d' resp.

LEMMA dim V infinite => |V| = d |F| (Proof: easy)

Notice |V'| = |F|^d since a functional is uniquely determined by its
values on a basis.

Lemma => |V'| = d'|F| = max(d',|F|), so to show d' = |F|^d it suffices to
show that d' >= |F|. Let e_i be a countable subset of a basis of V. Then
the functionals L_c : e_i -> c^i (extended by zero on the rest of the
basis), for c != 0 in F, are linearly independent, as is easily checked,
so d' >= |F|, hence d' = |F|^d.

--Bill Dubuque

quasi

Sep 24, 2010, 8:50:52 PM
On 24 Sep 2010 18:49:02 -0400, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:

Nice proof!

quasi

achille

Sep 24, 2010, 8:41:21 PM
On Sep 25, 6:49 am, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:

Wow, this proof is really clean! even I can understand it ;-p

Arturo Magidin

Sep 24, 2010, 11:26:18 PM
On Sep 24, 5:49 pm, Bill Dubuque <w...@nestle.csail.mit.edu> wrote:

I guess I'm being dense this week, but I don't see the "easily
checked" part.


--
Arturo Magidin

quasi

Sep 25, 2010, 12:49:25 AM

I can do that one (to partly redeem myself).

Let c_1, ..., c_n be distinct nonzero elements of F and suppose

a_1*L_(c_1) + ... + a_n*L_(c_n) = 0

for some a_1, ..., a_n in F.

Then for all i in N,

a_1*(c_1)^i + ... + a_n*(c_n)^i = 0

There are infinitely many such equations, but we only need the first n
of them to prove that a_1, ..., a_n are all zero. Simply regard it as
a matrix equation of the form Ca = 0 where C is the n x n matrix whose
i'th row is (c_1)^i, ..., (c_n)^i and a is the column vector (a_1,
..., a_n). Then note that the determinant of C equals

+/-(c_1*...*c_n)*product(c_i - c_j, over all pairs i < j)

so C is nonsingular, since the c_i are distinct and nonzero.
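
(A quick symbolic sanity check of that determinant, as a minimal
sketch in Python assuming SymPy, with n = 3 and the c_i left symbolic.)

    # Row i of C is (c_1^i, ..., c_n^i); its determinant factors as
    # +/- c_1*...*c_n times the product of the differences c_i - c_j.
    from sympy import symbols, Matrix, factor

    n = 3
    c = symbols(f'c1:{n + 1}')                      # c1, c2, c3
    C = Matrix(n, n, lambda i, j: c[j]**(i + 1))
    print(factor(C.det()))
    # (up to sign) c1*c2*c3*(c1 - c2)*(c1 - c3)*(c2 - c3),
    # nonzero when the c_i are distinct and nonzero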

quasi

Arturo Magidin

Sep 25, 2010, 12:11:21 AM

Ah; yes. Somehow, I was trying to get equations with the c_i as the
unknowns and to derive a contradiction (some sort of polynomial
equation with too many roots), and I was just not seeing it. Thanks.

And of course, thanks to Bill too.

--
Arturo Magidin

Bill Dubuque

Sep 25, 2010, 1:07:06 AM
quasi <qu...@null.set> wrote:
>Arturo Magidin <mag...@member.ams.org> wrote:
>>[...]
>>
>> I guess I'm being dense this week, but I don't see the "easily checked" part
>
> I can do that one (to partly redeem myself).
>
> Let c_1, ..., c_n be distinct nonzero elements of F and suppose
>
> a_1*L_(c_1) + ... + a_n*L_(c_n) = 0
>
> for some a_1, ..., a_n in F. Then for all i in N,
>
> a_1*(c_1)^i + ... + a_n*(c_n)^i = 0
>
> There are infinitely many such equations, but we only need the first n
> of them to prove that a_1, ..., a_n are all zero. Simply regard it as
> a matrix equation of the form Ca = 0 where C is the n x n matrix whose
> i'th row is (c_1)^i, ..., (c_n)^i and a is the column vector (a_1,
> ..., a_n). Then note that the determinant of C equals
>
> +/-(c_1*...*c_n)*product(c_i - c_j, over all pairs i < j)
>
> so C is nonsingular, since the c_i are distinct and nonzero.

Yes, that's a standard Vandermonde matrix argument, e.g. see my post [1].

More simply, it follows immediately from the obvious fact that the
formal power series Sum_{i>=0} (cx)^i = 1/(1 - cx), for all c != 0 in F,
are F-linearly independent since they have distinct poles.
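
(A small symbolic check of that independence for three concrete values
of c, as a minimal sketch in Python assuming SymPy: clear denominators
and compare coefficients.)

    # If a1/(1-x) + a2/(1-2x) + a3/(1-3x) = 0 identically, then a1 = a2 = a3 = 0.
    from sympy import symbols, together, fraction, Poly, solve

    x, a1, a2, a3 = symbols('x a1 a2 a3')
    expr = a1/(1 - x) + a2/(1 - 2*x) + a3/(1 - 3*x)
    num, den = fraction(together(expr))
    eqs = Poly(num, x).all_coeffs()                 # numerator coefficients
    print(solve(eqs, [a1, a2, a3]))                 # {a1: 0, a2: 0, a3: 0}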

--Bill Dubuque

[1] http://groups.google.com/group/sci.math/msg/2b82d0fb14429974
