# Why isn't there a cleaner proof that a finite-dimensional vector space is isomorphic to its double dual?

One of the basic results concerning duality is that a finite-dimensional vector space V is isomorphic to its double dual V**. A sketch of the proof is as follows.

1. Every vector v in V can be thought of as a linear functional on V*. Indeed, given v* in V*, define v(v*) to be v*(v). To distinguish more clearly between V and V** we can rephrase this as follows: there is a map g from V to V**, taking v in V to g(v) in V**, where g(v) is the linear functional on V* defined by the formula g(v)(v*)=v*(v).
2. It is not hard to check (i) that g(v) really is a linear functional on V* and (ii) that the map taking v to g(v) is a linear map from V to V**.
3. Given any basis v1,...,vn of V, we can construct a dual basis v1*,...,vn* of V*, defining the linear functional vi* by the formula vi*(a1v1+...+anvn)=ai, or, more slickly, by the formula vi*(vj)=deltaij (the Kronecker delta, which is 1 when i=j and 0 otherwise).
4. It is not hard to check that this really does define a basis of V*.
5. It follows that a finite-dimensional vector space has the same dimension as its dual.
6. It follows that a finite-dimensional vector space has the same dimension as its double dual.
7. Hence, if we can show that the map g:V-->V** defined earlier has zero kernel, then we automatically know that its image is the whole of V**, and hence that g is an isomorphism.
8. Suppose g(v)=0. From the definition of g this means that v*(v)=0 for every v* in V*. We would like this to imply that v=0.
9. Considering the contrapositive statement instead, we find ourselves wanting to prove that if v is a non-zero vector in V then there must be a linear functional v* in V* such that v*(v) is not 0.
10. Let v1,...,vn be a basis of V. Then v=a1v1+...+anvn for some sequence of scalars a1,...,an. If v is not 0, then some ai must be non-zero. But then vi*(v)=ai is non-zero, so vi* is the desired linear functional.
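
The steps above can be made concrete numerically. Here is a minimal sketch in Python with NumPy (the particular basis is my own choice for illustration): the dual basis functionals of Step 3 are the rows of the inverse of the matrix whose columns are the basis vectors, and the functional needed in Step 10 is read off from the coordinates.

```python
import numpy as np

# Take R^3 with the basis v1, v2, v3 given by the columns of B
# (any invertible matrix would do; this one is an arbitrary choice).
B = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# The dual basis functionals v1*, ..., vn* are the rows of B^{-1}:
# row i applied to column j gives delta_ij, as in Step 3.
D = np.linalg.inv(B)
assert np.allclose(D @ B, np.eye(3))

# Step 10: write v = a1 v1 + a2 v2 + a3 v3 with some ai non-zero;
# then vi*(v) = ai is non-zero, so v*(v) = 0 for all v* forces v = 0.
v = B @ np.array([0., -2., 5.])  # a non-zero vector with a2 = -2
coords = D @ v                   # recovers (a1, a2, a3)
assert np.allclose(coords, [0., -2., 5.])
assert max(abs(coords)) > 0      # some vi*(v) is non-zero
```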

What is there to dislike about the above proof? The first two steps are very natural: the fact that a vector in V can also be regarded as a linear functional on V* is clearly the reason for the isomorphism between V and V**. However, all they actually prove is that there is a natural map from V to V**, so to complete the proof one must show that its kernel is zero and its image is all of V**.

The nicest way to do that would surely be to take an arbitrary element of V**, that is, a linear functional defined on V*, and come up with some vector v in V that has the same effect. In that way we would have shown directly that g is invertible. Instead we proceed indirectly. By taking a basis of V and taking dual bases twice, we establish that V has the same dimension as V**, so that instead of constructing an inverse for the embedding g, we show that g has zero kernel and use a bit of theory to tell us that that is enough. At the heart of that theory is the fact that every n by n matrix can be put into reduced row-echelon form, which means that the proof cannot be regarded as mere `abstract nonsense' in the way that steps 1 and 2 can.

Suppose we were trying to write out a complete proof from first principles and wanted to make it as simple as possible. Could we find one that did not involve Gaussian elimination in some form or another? And is it really necessary to choose a basis of V at various points in the argument? The statement of the theorem does not involve dimension, except in the innocuous-looking qualification that V should be finite-dimensional, so one might think that there was a way of doing things more canonically, and avoiding the unpleasantly arbitrary choice of basis.

### Infinite-dimensional vector spaces.

The easy way to see that there is no truly simple proof that V is isomorphic to V** is to observe that the result is false for infinite-dimensional vector spaces. For example, let V be the space of all infinite real sequences with only finitely many non-zero terms. It is not hard to show that V* can be identified with the space of all infinite sequences (if v* is such a sequence and v belongs to V then v*(v) is the `scalar product' of v* and v, which is a finite sum).
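
This pairing is easy to realize concretely. Here is a minimal Python sketch, assuming we store an element of V as a dictionary of its non-zero terms and an element of V* as an arbitrary function of the index (both representations are my own choices for illustration):

```python
def pair(vstar, v):
    """Apply the functional given by an arbitrary sequence vstar
    (a function n -> vstar_n) to v, a finitely supported sequence
    stored as {n: v_n}.  The `scalar product' is a finite sum
    precisely because v has only finitely many non-zero terms."""
    return sum(vstar(n) * vn for n, vn in v.items())

ones = lambda n: 1.0       # the sequence (1, 1, 1, ...), an element of V*
v = {0: 2.0, 5: -3.0}      # the sequence (2, 0, 0, 0, 0, -3, 0, ...) in V
assert pair(ones, v) == -1.0
assert pair(lambda n: float(n), v) == -15.0
```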

An immediate difference between the situation we are in now and the finite-dimensional one is that V* does not have the same dimension as V. By this I mean that, whereas V has a countable basis (consisting of the sequences that are 1 in exactly one place and 0 everywhere else), a diagonal argument shows that V* cannot be spanned by countably many sequences. (A sketch of this is as follows. If you have countably many infinite real sequences v1,v2,..., then construct a new sequence v in such a way that the first two terms of v mean that it is not a multiple of v1, the first three terms mean that it is not a linear combination of v1 and v2, and so on. The resulting sequence is not a finite linear combination of the vi.)
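
One stage of this diagonal argument can be carried out numerically. The sketch below, in Python with NumPy (the particular sequences are illustrative), finds, given k sequences, a choice of the first k+1 terms that already prevents a new sequence from being a linear combination of them:

```python
import numpy as np

def escaping_prefix(prefixes):
    """prefixes: the first k+1 terms of each of k sequences (rows).
    Returns a non-zero vector w of length k+1 orthogonal to every row.
    Such a w cannot be a linear combination of the rows: if it were,
    then w.w would be a combination of the (zero) inner products row.w,
    forcing w = 0.  Hence any sequence beginning with w is not a linear
    combination of the k given sequences, since restriction to the
    first k+1 coordinates is linear."""
    A = np.array(prefixes, dtype=float)  # k x (k+1): more columns than rows
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]                        # unit vector in the null space of A

prefixes = [[1., 1., 1.],    # first three terms of (1, 1, 1, ...)
            [1., 2., 4.]]    # first three terms of (1, 2, 4, 8, ...)
w = escaping_prefix(prefixes)
assert np.allclose(np.array(prefixes) @ w, 0.0, atol=1e-9)
assert np.linalg.norm(w) > 0.5           # w is genuinely non-zero
```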

Unfortunately, we must now rely on a further piece of theory: every vector space has a basis. If the vector space V is infinite-dimensional, then this means that V contains a subset B such that every vector v in V is a linear combination of (finitely many) elements of B, and any finite subset of B is linearly independent in the usual sense. This statement requires the axiom of choice for its proof. It is also true that every linearly independent subset of V can be extended to a basis.

Using this second fact, let us extend the linearly independent set consisting of the sequence v=(1,1,1,...) and all the sequences en (where en is 1 in the nth place and 0 elsewhere) to a basis B of the space V* above. Define a linear functional f on V* by setting f(v)=1 and f(w)=0 for every other w in B. (Notice that this really does define a linear functional on V*, since every element of V* is a linear combination of elements of B in a unique way.) Since f(en)=0 for every n, f cannot be the image of an element of V under the natural embedding of V into V**.

This shows that the natural embedding is not an isomorphism, and that is enough to indicate that there will not be an easy proof for finite-dimensional vector spaces (since such a proof would use the natural embedding). One can show that there is no isomorphism at all by observing that what I was really doing in the previous paragraph was constructing an element of V** corresponding to something like the dual basis of B (it won't be a basis, as it doesn't span V**). I could have found uncountably many of these elements, since B is uncountable, and they are linearly independent (as can easily be shown), from which it follows that the dimension of V** is uncountable. Hence, V and V** are not isomorphic.

This justifies the use of Step 7 in the proof. One might still ask for Step 10 to be simplified. Recall that we wished to prove that if v was a non-zero vector in V then there must be a linear functional v* in V* such that v*(v) is not 0. To do this we took a basis of V and used an element of the dual basis. However, the choice of basis was non-canonical and the statement looks pretty obvious. Is there not some more direct way of showing that v* exists?

This question I will not answer in detail, since to justify what I say does involve knowing more about logic and set theory than I am presupposing. However, let me merely comment that if V is an arbitrary infinite-dimensional vector space, then it is not even obvious how to show that V* contains anything apart from the zero functional, unless one is prepared to take a basis of V, for which the axiom of choice is necessary. Otherwise, one somehow has nothing to use to build such a functional. In fact, the situation is worse still: there are models of set theory without the axiom of choice that contain infinite-dimensional vector spaces V such that V* really is {0}.

Thus the axiom of choice is needed to prove that V* is non-trivial. My criticism of Step 10 was that it involved a non-canonical choice of a basis. Since the role of the axiom of choice in proofs is to make infinitely many non-canonical choices, and the axiom of choice was necessary for the infinite-dimensional analogue of what we were trying to prove, we have a clear indication that, for the finite-dimensional statement we actually proved, it was necessary to make finitely many non-canonical choices. Thus, we should not search for a neater proof.

### Duality of groups.

Let G be a finite group. A character on G is a homomorphism from G to the multiplicative group of non-zero complex numbers. Two characters on G can be multiplied together pointwise to form a third, since if u and v are characters, then

uv(gh)=u(gh)v(gh)=u(g)u(h)v(g)v(h)=u(g)v(g)u(h)v(h)=uv(g)uv(h).

In this way, the characters form a group themselves, and this group is Abelian, for the simple reason that multiplication of complex numbers is commutative. It is known as the dual group G* of G.
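
For a concrete instance, here is a Python sketch with G = Z/6Z written additively (the indexing of the characters by k is my own labelling, not anything from the text): the characters are chi_k(g) = exp(2 pi i k g / 6), and their pointwise product corresponds to adding the indices mod 6.

```python
import cmath

n = 6  # G = Z/6Z, written additively
# The n characters of Z/nZ: chi_k(g) = exp(2*pi*i*k*g/n).
chars = [lambda g, k=k: cmath.exp(2j * cmath.pi * k * g / n) for k in range(n)]

# Each chi_k is a homomorphism: chi(g + h) = chi(g) * chi(h).
for chi in chars:
    for g in range(n):
        for h in range(n):
            assert abs(chi(g + h) - chi(g) * chi(h)) < 1e-9

# Products of characters are characters: chi_j * chi_k = chi_{(j+k) mod n},
# so the dual group is itself cyclic of order n, i.e. G* is isomorphic to G.
j, k = 2, 5
assert all(abs(chars[j](g) * chars[k](g) - chars[(j + k) % n](g)) < 1e-9
           for g in range(n))
```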

One can of course go on and define the double dual of G. Moreover, there is always a natural embedding from G to its double dual, since if g is an element of G, we can think of it as a character on G* by setting g(u) to be u(g). Thus, up to this point there is a very close analogy with duality in vector spaces. It turns out that if G is itself Abelian, then the natural embedding from G to G** is an isomorphism.
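
In the same example one can watch the double-dual embedding at work. The following Python sketch (for G = Z/5Z; again the indexing is my own) checks that distinct group elements give distinct evaluation functionals on the characters, so that counting then shows the embedding is an isomorphism:

```python
import cmath

n = 5  # G = Z/5Z
chi = lambda k, g: cmath.exp(2j * cmath.pi * k * g / n)  # k-th character at g

# The natural embedding sends g to the evaluation functional k -> chi_k(g).
# Record each functional by its values on all n characters (rounded so
# that the tuples can be compared safely as floats).
evals = [tuple((round(chi(k, g).real, 9), round(chi(k, g).imag, 9))
               for k in range(n)) for g in range(n)]

# Distinct elements give distinct functionals (chi_1 alone separates them),
# and |G**| = |G*| = |G| = n, so the injective embedding is an isomorphism.
assert len(set(evals)) == n
```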

Obviously the condition that G should be Abelian is necessary for this, since G** is always Abelian. In the other direction, if G is very far from being Abelian then the dual group G* may be trivial. Suppose, for example, that G is a non-Abelian simple group, and let u be a character defined on G. We know that the kernel of u is a normal subgroup, so either it is the whole of G (which means that u(g)=1 for every g in G) or it is trivial, which means that u is injective. In the second case, u defines an isomorphism from G to its image u(G), which is impossible as u(G), being a subgroup of the multiplicative group of complex numbers, is Abelian.

Thus, when G is non-Abelian and simple, the dual group of G is trivial, from which it follows in particular that G is certainly not isomorphic to G**. This gives another reason for the proof for finite-dimensional vector spaces being somewhat complicated. If it could be proved in some easy formal way that the natural embedding of a finite-dimensional vector space V into its double dual was an isomorphism, then the same argument might well show that the natural embedding of G into G** was an isomorphism as well. What is more, since the embedding of G into G** is not even necessarily injective, this also indicates that Step 10 of the vector-space argument had to be complicated, and does so without appealing to the axiom of choice.
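
The failure of injectivity can even be checked by brute force in Python. S3 is not simple, but it is small enough to enumerate: its only characters turn out to be the trivial character and the sign, and both take the same value on the identity and on a 3-cycle, so those two elements give the same evaluation functional on G*.

```python
import itertools, cmath

# S3 as permutation tuples p, acting by i -> p[i].
S3 = list(itertools.permutations(range(3)))
def compose(p, q):                      # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

# Every element of S3 has order dividing 6, so any character takes values
# in the 6th roots of unity; brute-force all assignments and keep those
# that respect the group law.
roots = [cmath.exp(2j * cmath.pi * k / 6) for k in range(6)]
chars = []
for vals in itertools.product(roots, repeat=len(S3)):
    chi = dict(zip(S3, vals))
    if all(abs(chi[compose(p, q)] - chi[p] * chi[q]) < 1e-9
           for p in S3 for q in S3):
        chars.append(chi)

assert len(chars) == 2                  # only the trivial and sign characters

# The identity and the 3-cycle 0 -> 1 -> 2 -> 0 agree on both characters,
# so the natural map from S3 to S3** is not injective.
e, c = (0, 1, 2), (1, 2, 0)
assert all(abs(chi[e] - chi[c]) < 1e-9 for chi in chars)
```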

It is worth remarking at this point that the proofs that G is isomorphic to G** when G is a finite Abelian group, and of the simpler-seeming statement that G* is non-trivial, are not pleasant. In particular, they rely on the structure theorem, which says that every finite Abelian group is isomorphic to a product of cyclic groups. Since it is not isomorphic to such a product in a unique way, using this theorem involves making non-canonical choices. Again, some sort of complication is to be expected if we are to distinguish between Abelian and non-Abelian groups.
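
The role of the structure theorem can be illustrated in Python: once G is written as a product of cyclic groups, its characters can be listed explicitly, one for each element of G, which is the counting fact behind G* being non-trivial and G being isomorphic to G**. (The decomposition Z/2 x Z/3 and the labelling chi_{a,b} are my illustrative choices.)

```python
import cmath, itertools

m, n = 2, 3   # G = Z/2Z x Z/3Z, a product of cyclic groups
def chi(a, b):
    """The character chi_{a,b}(x, y) = exp(2*pi*i*(a*x/m + b*y/n))."""
    return lambda x, y: cmath.exp(2j * cmath.pi * (a * x / m + b * y / n))

chars = [chi(a, b) for a in range(m) for b in range(n)]

# Each chi_{a,b} is a homomorphism on G.
G = list(itertools.product(range(m), range(n)))
for c in chars:
    for (x1, y1), (x2, y2) in itertools.product(G, repeat=2):
        assert abs(c(x1 + x2, y1 + y2) - c(x1, y1) * c(x2, y2)) < 1e-9

# They are pairwise distinct, so |G*| >= |G|; in fact these are all the
# characters, giving |G*| = |G| and, iterating, |G**| = |G|.
tables = {tuple((round(c(x, y).real, 6), round(c(x, y).imag, 6))
                for x, y in G) for c in chars}
assert len(tables) == m * n
```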