The relationship between theory and computation in linear algebra.

There are two very different ways of thinking about the results of a typical linear algebra course. The first is theoretical: one enjoys the purely abstract notion of a vector space, with neat proofs of theorems to do with bases, dimension, linear maps, kernels, images, ranks and so on. The second is computational: for example, one might be given some explicit vector space (such as the kernel of a map defined by a matrix) and asked to exhibit a basis. It is quite possible to be completely on top of the theory and yet completely at sea when one is called upon to show how the theory relates to a particular example. Many students have found themselves in this position at one time or another.
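To make the computational side concrete, here is a small sketch (the function name `kernel_basis` is illustrative, not from any standard library) of how one might exhibit a basis for the kernel of a matrix: row-reduce over the rationals, then read off one basis vector per free column.

```python
from fractions import Fraction

def kernel_basis(rows):
    """Return a basis for the kernel of the matrix with the given rows,
    by Gaussian elimination over the rationals (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in rows]
    n_rows, n_cols = len(m), len(m[0])
    pivot_cols = []
    r = 0
    for c in range(n_cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, n_rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # normalise the pivot row, then clear column c in every other row
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(n_rows):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivot_cols.append(c)
        r += 1
        if r == n_rows:
            break
    # each free (non-pivot) column contributes one basis vector
    basis = []
    for fc in (c for c in range(n_cols) if c not in pivot_cols):
        v = [Fraction(0)] * n_cols
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivot_cols):
            v[pc] = -m[i][fc]
        basis.append(v)
    return basis
```

For instance, `kernel_basis([[1, 2, 3], [2, 4, 6]])` returns two vectors, reflecting the fact that this rank-one matrix on a three-dimensional space has a two-dimensional kernel.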

It is therefore a very good idea, when studying linear algebra for the first time, to pay close attention to the computational side of the subject. (The theoretical side, one can almost guarantee, will not be neglected by your lecturer.) One way to do this is simply to learn algorithms for solving the kinds of questions you are asked to solve (which usually boil down to solving some simultaneous equations). Better still is to see how the theoretical proofs often give rise to those algorithms if you work through them with reference to a specific example. (Conversely, the algorithms, once tidied up, give rise to the theoretical proofs.) Many years ago, somebody said to me, 'Of course, the Steinitz exchange lemma is basically just Gaussian elimination'. Though I believe this statement, I have never quite got round to seeing for myself why it is true. However, I shall soon be teaching the Steinitz exchange lemma, and intend to put this situation to rights. I shall then add a section on why the Steinitz exchange lemma and Gaussian elimination are basically the same thing.
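To give a hint of the connection, here is one way the exchange argument can be carried out algorithmically (this sketch is my own illustrative formulation, not a claim about how the connection is usually presented): at each step, an independent vector is expressed in terms of the current spanning list by Gaussian elimination on an augmented matrix, and is then swapped in for a spanning vector that carries a nonzero coefficient and has not itself been swapped in.

```python
from fractions import Fraction

def express(v, ws):
    """Find coefficients c with sum(c[i] * ws[i]) = v, by Gaussian
    elimination on the augmented matrix [ws^T | v].
    Returns None if v is not in the span of ws."""
    n, k = len(v), len(ws)
    # rows indexed by coordinate; columns are the w's, last column is v
    m = [[Fraction(ws[j][i]) for j in range(k)] + [Fraction(v[i])]
         for i in range(n)]
    pivots = []
    r = 0
    for c in range(k):
        piv = next((i for i in range(r, n) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(n):
            if i != r and m[i][c] != 0:
                f = m[i][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append((r, c))
        r += 1
    # a row of zeros with a nonzero last entry means v is not in the span
    if any(all(m[i][c] == 0 for c in range(k)) and m[i][k] != 0
           for i in range(n)):
        return None
    coeffs = [Fraction(0)] * k
    for row, col in pivots:
        coeffs[col] = m[row][k]
    return coeffs

def steinitz_exchange(independent, spanning):
    """Swap each independent vector into the spanning list, evicting a
    spanning vector with a nonzero coefficient that has not already
    been swapped in; linear independence guarantees one always exists."""
    spanning = list(spanning)
    swapped = set()
    for v in independent:
        coeffs = express(v, spanning)
        j = next(j for j, c in enumerate(coeffs)
                 if c != 0 and j not in swapped)
        spanning[j] = v
        swapped.add(j)
    return spanning
```

The point of the sketch is that the only non-trivial work in the exchange step, namely writing each independent vector in terms of the current spanning list, is exactly a round of Gaussian elimination.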