Just do it: a useful mathematical technique

Sometimes, if you are asked to find a mathematical object with certain properties, the best way to do it is simply to jump in and start building, and carry on until something forces you to stop. If you are lucky, this never happens and you have `just done it'. (I first heard the slogan `just do it' from Imre Leader. He tells me that he and a few others invented it after supervisions with Béla Bollobás, who would occasionally ask, when they got stuck, `Why don't you just do it?') On this page, I shall give a few examples of the method at work, but not too many, as I don't want to spoil for others the pleasure of coming up with just-do-it proofs of their own.


Interchanging limits.

A standard undergraduate exercise is to come up with a double sequence (a_{mn}) with the property that the iterated limits lim_m lim_n a_{mn} and lim_n lim_m a_{mn} both exist but are different. In fact, one can find a_{mn} such that lim_n a_{mn} = 1 for every m and lim_m a_{mn} = 0 for every n.

It is quite common, if one is seeing this for the first time, to try to think of a formula for a_{mn} that will do the job. Indeed, this can be done quite easily if one starts with the general idea of setting a_{mn} = (n+f(m))/(n+g(m)), or something similar. This guarantees that the limits over n will be 1, while leaving us total freedom to choose f and g. But then all one has to do is let f and g tend to infinity, while f/g tends to zero. So, for example, (n+m)/(n+m^2) will do fine.
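For readers who like to see numbers, here is a quick computational check of this formula (an illustrative Python snippet, not part of the argument; the particular values of m and n are arbitrary):

```python
# Check numerically that a_{mn} = (n + m) / (n + m^2) tends to 1 along each
# row (n -> infinity, m fixed) and to 0 down each column (m -> infinity,
# n fixed).

def a(m, n):
    return (n + m) / (n + m ** 2)

# Along row m = 3 the values approach 1 as n grows...
row = [a(3, n) for n in (10, 100, 10_000, 1_000_000)]
# ...while down column n = 3 they approach 0 as m grows.
col = [a(m, 3) for m in (10, 100, 10_000, 1_000_000)]

print(row)
print(col)
```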

The just-do-it approach to this problem is very different. Let's try to choose values for the a_{mn}, not all at once, in such a way that the various constraints are satisfied. If we think of (a_{mn}) as an infinite matrix, then these constraints are that the terms of the matrix tend to 1 along all rows, and to 0 down all columns. So let us take the rows and columns one after another, making sure that this is always the case. An obvious order to put them in is first row, first column, second row, second column and so on. (In fact, any bijection between N and the set of rows and columns defines an order for which this argument works.)

The simplest way to make the first row tend to 1 is to make all its entries 1. The simplest way to make the first column tend to zero, given our choice for the first row, is to make it start with a 1 and thereafter be 0. The simplest choice for the second row is now (0,1,1,1,...), for the second column now (1,1,0,0,...) and so on. Each time we look at a new row or column, we have chosen only finitely many of the terms in it, so we can make the rest 1 (for a row) or 0 (for a column). As is easy to see, the resulting formula for (a_{mn}) is 0 if m > n and 1 otherwise. More important than this formula, though, is the understanding we gain from the proof. It shows that cleverness is not needed to solve the problem (as one might mistakenly think if one was merely presented with the formula (n+m)/(n+m^2)).
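The construction is easy to carry out mechanically. Here is a small Python sketch that fills in a finite corner of the matrix in exactly this row-column-row-column order and confirms the pattern (the size N = 8 is an arbitrary choice):

```python
# Fill an N x N corner of the matrix just as in the proof: handle row 0,
# column 0, row 1, column 1, ..., each time setting every still-unchosen
# entry to 1 (in a row) or 0 (in a column).
N = 8
a = [[None] * N for _ in range(N)]   # a[m][n], indexed from 0 here

for k in range(N):
    for n in range(N):               # row k: unchosen entries become 1
        if a[k][n] is None:
            a[k][n] = 1
    for m in range(N):               # column k: unchosen entries become 0
        if a[m][k] is None:
            a[m][k] = 0

# The pattern that emerges: 0 below the diagonal, 1 on and above it.
assert all(a[m][n] == (0 if m > n else 1)
           for m in range(N) for n in range(N))
```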

The reason for this was that the constraints that had to be satisfied were too weak to force us to be ingenious. This is not true of all problems: often one tries a just-do-it approach and finds that after a while one has made too many choices for it to be possible to satisfy the remaining constraints.


Avoiding an infinite monochromatic arithmetic progression.

Suppose you are asked to colour the positive integers with two colours in such a way that there is no infinite arithmetic progression all of one colour. One quite nice solution I have seen to this is to colour the first one red, the next two blue, the next three red, the next four blue and so on. Given an infinite arithmetic progression with common difference d, it will be unable to jump over the blocks of red once they become wider than d, and the same for blue.

Once again, this is not a particularly difficult problem, but the just-do-it approach makes it even easier. There are countably many infinite arithmetic progressions (one must choose a starting point and a common difference, and that's it). So enumerate them in an arbitrary way and for each one in turn colour one of its so-far-uncoloured terms red and another blue. At each stage, one has coloured only finitely many integers, but one has an infinite arithmetic progression to mess up, so, as before, nothing can go wrong.
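Here is a finite Python sketch of this procedure (the bound B on the starting points and common differences is an arbitrary choice for illustration; the real proof handles all of the countably many progressions):

```python
# Enumerate the APs with start s and common difference d, both at most B,
# and for each one colour one still-uncoloured term red and a later one blue.
from itertools import count

colour = {}  # integer -> 'R' or 'B'; anything absent is still uncoloured

B = 12
aps = [(s, d) for s in range(1, B + 1) for d in range(1, B + 1)]

for s, d in aps:
    terms = (s + k * d for k in count())      # the AP s, s+d, s+2d, ...
    red = next(n for n in terms if n not in colour)
    colour[red] = 'R'                         # one uncoloured term red...
    blue = next(n for n in terms if n not in colour)
    colour[blue] = 'B'                        # ...and a later one blue

# Every AP handled so far contains a red term and a blue term, so none of
# them can be monochromatic.
for s, d in aps:
    seen = {colour[n] for n in (s + k * d for k in range(500)) if n in colour}
    assert {'R', 'B'} <= seen
```

At each step only finitely many integers are coloured, so the two `next` calls always succeed, exactly as in the proof.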

A small final remark: this problem can also be solved by a very simple probabilistic argument. If you choose the colours randomly and independently, then the probability that any given AP is all of one colour is zero, so the expected number of monochromatic APs is zero. In particular, it must be possible for there to be none of them.


A large well-separated set of points.

Suppose one is asked to prove that it is possible to fit r^3/16 (disjoint) spheres of radius 1 inside a sphere of radius r, where r >= 5. (The statement is true for smaller r as well, but this range keeps the final calculation clean.) Perhaps the most obvious approach is to try to arrange the spheres in a nice pattern and do a few calculations to check that this works. The just-do-it approach is less efficient, but has the advantage of involving only the very simplest of calculations.


What is a just-do-it approach to this problem? It is simply to place the small spheres one by one inside the large sphere, putting them anywhere where they will fit, until there is no room left. This is of course equivalent to choosing the centres of the small spheres. Suppose we have chosen the centres x_1, ..., x_k of the first k spheres. What constraints does this put on x_{k+1}? Well, it must lie within r-1 of the centre of the large sphere, and it must not lie within 2 of any of the points x_1, ..., x_k. Otherwise, anything will do. Thus, writing V for the volume of a sphere of radius 1 (which we don't even need to know), it is constrained to lie inside a set of volume (r-1)^3 V and not to lie inside the union of k sets of volume 8V. This union has total volume at most 8Vk (it will be less if some of the spheres of radius 2 about the x_i overlap), so if we can't choose x_{k+1}, then 8Vk >= (r-1)^3 V, from which it follows that k >= (r-1)^3/8. Since r >= 5, we have (1 - 1/r)^3 >= (4/5)^3 = 64/125 > 1/2, so k is at least r^3/16.
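One can watch the greedy process in action with a short Python sketch (the radius r = 6 and the grid of candidate centres are arbitrary illustrative choices, not part of the argument, which allows the centres to go anywhere):

```python
# Greedily place unit spheres in a sphere of radius r: scan candidate
# centres on a grid inside the ball of radius r-1 and accept any candidate
# at distance >= 2 from every centre accepted so far.
import itertools

r = 6.0        # an arbitrary illustrative radius
h = 0.5        # grid spacing for candidate centres (also arbitrary)
centres = []

k = int((r - 1) / h)
ticks = [i * h for i in range(-k, k + 1)]
for x, y, z in itertools.product(ticks, repeat=3):
    if x * x + y * y + z * z > (r - 1) ** 2:
        continue   # the centre must lie within r-1 of the big sphere's centre
    if all((x - a) ** 2 + (y - b) ** 2 + (z - c) ** 2 >= 4
           for a, b, c in centres):
        centres.append((x, y, z))   # fits: keep it and move on

print(len(centres), "spheres placed; the volume argument guarantees", r ** 3 / 16)
```

The greedy count comfortably exceeds the guarantee from the volume argument, which is the point: the proof only needs the crude bound, not the best packing.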

Aside from its simplicity, the above proof has the big advantage that it generalizes straightforwardly to higher dimensions and/or different convex bodies, where it usually gives far better results than any known method for explicitly choosing the centres.


A set in the plane that intersects every line in exactly two points.

This is a famous example, one of a number of strange constructions discovered by the Polish school of logicians in the first half of the twentieth century. It is slightly less elementary in that it needs the axiom of choice. (Whether you can do without the axiom of choice - in particular, whether there is a Borel set that works - is an open problem.)

The idea is this. Just as with the first two examples above, one wants to do something in turn to every one of a large number of objects in such a way that each time one arrives at a new object, one has not made one's task impossible. In this case, we would like to consider all the lines in turn, and make sure that they have two points in them, taking care that we never accidentally choose three collinear points.

The axiom of choice tells us that we can well-order all the lines in the plane (there are the same number of these as there are real numbers) in such a way that the number of predecessors of any given line is strictly less than the number of real numbers. Once we have done this, we go through the lines in order, choosing points for our set so as to make sure that each line contains exactly two. Suppose we reach line L. Either it has two points in it already (it cannot have more, since we take care never to choose three collinear points), in which case we do nothing, or we choose points to bring the total up to two. The one thing we mustn't do is choose our points in such a way that there are three in a line. But the number of points chosen so far is less than 2^{aleph_0}, and at most one of these lies in L. Hence, the number of points in L that we are forbidden to choose - at most one for each pair of points so far chosen - is also less than 2^{aleph_0}, from which it follows that there are plenty of points we can choose.


When are just-do-it proofs appropriate?

In each of the examples above, we had something, call it X, to build, which had to satisfy a large number of similar constraints. It was possible to put the constraints in some (usually fairly arbitrary) order and build X in stages, at each stage considering only the first few constraints and not having to worry too much that the decisions we made would force us to violate later ones. As Imre Leader points out to me (for which I am grateful - his comment resulted in this section), the essence of the method is that one should split one's task up into small and easy subtasks. For example (given by him), if one wishes to construct a sequence (a_n) with certain properties, one of which is that a_n tends to infinity, then one can first try to get a_n at least 1 when n > N_1, then at least 2 when n > N_2, and so on. If one thinks about the problem this way, one will sometimes be able to see that the solution is almost trivial and does not demand that one come up with a clever formula for a_n.
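Leader's subtask idea can be seen in miniature in the following illustrative Python sketch (the thresholds N_j and the finite length are arbitrary choices):

```python
# Build a sequence tending to infinity by imposing the constraints
# "a_n >= j for all n > N_j" one at a time, doing the laziest thing
# that works at each stage.
N = [5, 9, 14, 20]       # arbitrary thresholds N_1 < N_2 < N_3 < N_4
length = 30
a = [0] * (length + 1)   # a[1..length]; the laziest start is all zeros

for j, threshold in enumerate(N, start=1):
    for n in range(threshold + 1, length + 1):
        a[n] = max(a[n], j)   # subtask j: make a_n at least j for n > N_j

# Every subtask handled so far is satisfied.
assert all(a[n] >= j for j, t in enumerate(N, start=1)
           for n in range(t + 1, length + 1))
```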


Exercises.

1. (i) Show that there is an order-preserving bijection between Q (the set of rationals) and Q - {0}. (ii) Give necessary and sufficient conditions on a set A of real numbers for there to be an order-preserving bijection between A and Q.

2. Let (a_n) be a sequence such that sum_n a_n diverges. Prove that there is a sequence (b_n) such that b_n/a_n tends to zero and sum_n b_n diverges.

3. Does there exist a sequence (q_n) of rational numbers such that q_n tends to infinity and yet for every y > 0 there are at most finitely many natural numbers m such that my belongs to the set {q_n : n in N}?

4. Does there exist a function f: R --> R such that |f(x) - f(y)| < |x - y| for any two distinct numbers x, y, but there is no x such that f(x) = x?

5. Show that there is a subset A of the plane such that, for any rigid motion T, the intersection of A with TA contains exactly one point. (Note that a straight line would work if translations were not allowed.)


And finally ...

Feri Farassat kindly emailed me this picture of Kolmogorov. Apparently the writing next to his head is a famous quotation from him that means, "Dream for just a second and then do it!"