Does mathematics need a philosophy?


Introduction


There is a philosophical doctrine known as dialetheism, with, apparently, many adherents in Australia, who take the view that there can be true contradictions. I do not wish to defend this view, but nevertheless in the next 45 minutes I will be arguing, sincerely, that mathematics both does and does not need a philosophy. Of course, the apparent contradiction can in my case be resolved in a boringly conventional way: the statement "Mathematics needs philosophy" has at least two reasonable interpretations, and my contention is that one of them is false and another is true. But before I tell you what these interpretations are, I would like to say just a bit about what I hope to achieve in this talk, since as the first speaker in the series I have no precedents to draw on and have therefore had to decide for myself what kind of talk would be appropriate.

First, as you can see, I am reading from a script, something I would never do when giving a mathematics lecture. This, I fondly imagine, is how philosophers do things, at least some of the time, and it is how I prefer to operate when I need to choose my words carefully, as I do today.

Secondly, I would like to stress that it is not my purpose to say anything original. I have read enough philosophy to know that it is as hard to be an original philosopher, at least if one wishes to be sensible at the same time, as it is to be an original mathematician. When I come across a philosophical problem, I usually know pretty soon what my opinion is, and how I would begin to defend it, but if I delve into the literature, I discover that many people have had similar instincts and have worked out the defence in detail, and that many others have been unconvinced, and that whatever I think has already been labelled as some "-ism" or other. Even if my tone of voice occasionally makes it sound as though I think that I am the first to make some point, I don't. I have done a bit of remedial reading and re-reading over the last couple of weeks, but there are many large gaps and I will often not give credit where it is due, and for that I apologize to the philosophers here.

Thirdly, I am very conscious that I am talking to an audience with widely varying experience of mathematics and philosophy, so I hope you will be patient if quite a lot of what I say about your own discipline seems rather elementary and old hat.

Finally, I shall try to make the talk somewhat introductory. Two weeks have not been enough to develop a fully worked-out position on anything, so, rather than looking at one small issue in the philosophy of mathematics, I shall discuss many of the big questions in the subject, but none of them in much depth. I hope this will give us plenty to talk about when the discussion starts - it certainly ought to, as many of these questions have been debated for years.

If you ask a philosopher what the main problems are in the philosophy of mathematics, then the following two are likely to come up: what is the status of mathematical truth, and what is the nature of mathematical objects? That is, what gives mathematical statements their aura of infallibility, and what on earth are these statements about?

Let me very briefly describe three main (overlapping) schools of thought that have developed in response to these questions: Platonism, logicism and formalism.

The basic Platonist position is rather simple. Mathematical concepts have an objective existence independent of us, and a statement such as "2+2=4" is true because two plus two really does equal four. In other words, for a Platonist mathematical statements are pretty similar to statements such as "that cup is on the table" even if mathematical objects are less tangible than physical ones.

Logicism is an attempt to justify our extreme confidence in mathematical statements. It is the view that all of mathematics can be deduced from a few simple and undeniably true axioms using simple and undeniably valid logical steps. Usually these axioms come from set theory, and they are supposed to form the secure foundation on which the entire edifice of modern mathematics rests. Notice that one can be a Platonist and a logicist at the same time.

Formalism is more or less the antithesis of Platonism. One can caricature it by saying that the formalist believes that mathematics is nothing but a few rules for replacing one system of meaningless symbols with another. If we start by writing down some axioms and deduce from them a theorem, then what we have done is correctly apply our replacement rules to the strings of symbols that represent the axioms and end up with a string of symbols that represents the theorem. At the end of this process, what we know is not that the theorem is "true" or that some actually existing mathematical objects have a property of which we were previously unaware, but merely that a certain statement can be obtained from certain other statements by means of certain processes of manipulation.

There is another important philosophical attitude to mathematics, known as intuitionism, but since very few working mathematicians are intuitionists, I shall not discuss it today. Let me just say to those mathematicians who know a little about intuitionism that certain aspects of it that seem very off-putting, such as the rejection of the law of the excluded middle or the idea that a mathematical statement can "become true" when a proof is found, should not be dismissed as ridiculous: perhaps on a future occasion we should have a debate here about whether classical logic is the only logic worth considering. For what it is worth, I myself am prepared to countenance other ones.

For the rest of this talk, I shall discuss some fairly specific questions, in what I judge to be ascending order of complexity, with the idea that they will give us a convenient focus for discussion of more general issues of the kind I have been describing. But before I do so, let me give you a very quick idea of where my own philosophical sympathies lie.

I take the view, which I learnt recently goes under the name of naturalism, that a proper philosophical account of mathematics should be grounded in the actual practice of mathematicians. In fact, I should confess that I am a fan of the later Wittgenstein, and I broadly agree with his statement that "the meaning of a word is its use in the language". [Philosophical Investigations Part I section 43 - actually Wittgenstein qualifies it by saying that it is true "for a large class of cases".] So my general approach to a philosophical question in mathematics is to ask myself how a typical mathematician would react to it, and why. I do not mean by this that whatever an average mathematician thinks about the philosophy of mathematics is automatically correct, but rather than try to make precise what I do mean, let me illustrate it by my treatment of the questions that follow.


1. What is 2+2?


The first question I would like to ask is this: could it make sense to doubt whether 2+2=4? Let me do what I promised, and imagine the reaction of a typical mathematician to somebody who did. The conversation might run as follows.

Mathematician: Do you agree that 2 means the number after 1 and that 4 means the number after the number after 2?

Sceptic: Yes.

M: Do you agree that 2=1+1?

S: Yes.

M: Then you are forced to admit that 2+2=2+(1+1).

S: Yes.

M: Do you agree that addition is associative?

S: Yes.

M: Then you are forced to admit that 2+(1+1)=(2+1)+1.

S: Yes I am.

M: But (2+1)+1 is the number after the number after 2, so it's 4.

S: OK I'm convinced.

The general idea of the above argument is that we have some precise definitions of concepts such as 2, 4, "the number after" and +, and one or two simple axioms concerning them, such as the associativity of addition, and from those it is easy to prove that 2+2=4. End of story.
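To make the dialogue's machinery explicit, here is a minimal sketch in Python - an illustration of my own, not part of the argument - in which numbers are built up by a successor operation and addition is defined recursively, so that 2+2=4 falls out mechanically from the definitions.

```python
# A minimal sketch: numerals built from zero by a successor operation,
# with addition defined recursively, mirroring the dialogue above.

ZERO = ()

def succ(n):
    # "the number after n"
    return (n,)

def add(m, n):
    # Defined by: m + 0 = m, and m + succ(k) = succ(m + k).
    if n == ZERO:
        return m
    return succ(add(m, n[0]))

ONE = succ(ZERO)
TWO = succ(ONE)            # 2 is the number after 1
FOUR = succ(succ(TWO))     # 4 is the number after the number after 2

assert add(TWO, TWO) == FOUR   # 2 + 2 = 4, by unwinding the definitions
```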

That is, of course, roughly what I think, but before we move on I would nevertheless like to try to imagine a world in which it was natural to think that 2+2=5 and see what that tells us about our belief that 2+2=4.

In such a world, physical objects might have less clear boundaries than they do in ours, or vary more over time, and the following might be an observed empirical fact: that if you put two objects into a container, and then another two, and if you then look inside the container, you will find not four objects but five. A phenomenon like that, though strange, is certainly not a logical impossibility, though one does feel the need for more details: for example, if a being in this world holds up two fingers on one hand and two on the other, how many fingers is it holding up? If you put no apples into a bag and then put no further apples into the bag, do you have one apple? But then why not a tomato?

We are free to invent any answers we like to such questions if we can somehow remain logically consistent, so here is a simple suggestion. Perhaps in the strange world it is really the act of enclosing objects in a container that causes what seems to us to be peculiar consequences. It might be that this requires an expenditure of energy so that the container doesn't just explode the moment you put anything into it and there isn't the physical equivalent of what economists call arbitrage. And yet, the apparent duplication-machine properties of plastic bags and the like might be sufficiently common for 2+2=5 to seem a more natural statement than 2+2=4.

But what, one wants to ask, about our mental picture of numbers? If we just think of two apples and then think of another two, surely we are thinking of four apples, however you look at it.

But what should we say if we put that point to a being X from the other world, and X reacted as follows?

X: I don't know what you're talking about. Look, I'm thinking of two apples now. [Holds up three fingers from one hand.] Now I'm thinking of two more. [Holds up three fingers from the other hand making a row of six fingers.] The result - five apples.

Suppose that we were initially confused, but after a bit of discussion came to realize that X was associating the apples not with the fingers themselves but with the gaps between them. After all, between three fingers there are two gaps and between six fingers there are five. At any rate this might seem a good explanation to us. But perhaps X would be so used to a different way of thinking that it would resist our interpretation. To X, holding up three fingers and saying "two" could seem utterly natural: it might feel absolutely no need for a one-to-one correspondence between fingers and apples.

Faced with such a situation, it is tempting to take the following line: what X is "really" doing is giving different names to the positive integers. When X says "2+2=5", what this actually means is "3+3=6", and more generally X's false-sounding statement that "a+b=c" corresponds to our true statement "(a+1)+(b+1)=c+1".

But should we say this? Or is it better to say that what X means by addition is not our notion of addition but the more complicated (or so we judge) binary operation f(m,n)=m+n+1? Or is it enough simply to say that X uses a system of arithmetic that we can understand and explain in terms of ours in more than one way?

These questions are bothersome for a Platonist, particularly one who believes in direct reference, a philosophical doctrine I shan't discuss here. If the word "five", as used by us, directly refers to the number 5, then surely there ought to be a fact of the matter as to whether the same word used by X directly refers to 5 or 6 or something else. And yet there doesn't seem to be such a fact of the matter.

I think I will leave that question hanging since the world I have just attempted to describe is rather fanciful and there are many other ways of attacking Platonism.


2. The empty set.


My next question is whether there is such a thing as the empty set. This question might seem more basic than the first, but if it does then I put it to you that your mind has been warped by a century of logicism, because there is, if you think about it, something rather odd about the concept of a set with no elements. What, after all, is a set? It is a collection of objects (whatever that means). And to say that you have a collection of objects, except that there are no objects, sounds like a contradiction in terms.

I have various questions like this, and normally I don't worry about them. But they do cause me small problems when I lecture on concepts such as sets, functions and the like. I will explain that a set is a collection of objects, usually mathematical, but will not go on to say what a collection is, or a mathematical object. The empty set, I recently told the first-year undergraduates to whom I am lecturing this term, is the set with no elements, but I made no attempt to justify that there was such a set, and I'm glad to report that I got away with it.

There are various ways that one could try to argue for the existence of the empty set. For example, if it doesn't exist then what is the intersection of the sets {1,2} and {3,4}? Or what is the set of all natural numbers n such that n=n+1? These arguments demonstrate that the empty set does indeed exist if one is prepared to accept natural statements such as that the sets {1,2} and {3,4} exist and that given any two sets A and B there is a set C consisting of exactly the elements that A and B have in common. So if you are going to doubt the existence of the empty set you will probably find yourself doubting the existence of any sets at all.
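For what it is worth, the first of these arguments can be acted out in any programming language with a set type. In this trivial Python fragment - again an illustration of mine - accepting {1,2}, {3,4} and intersections forces the empty set on you.

```python
# If {1,2} and {3,4} exist and intersections exist, the empty set follows.
A = {1, 2}
B = {3, 4}
empty = A & B                      # the elements A and B have in common
assert empty == set()
assert len(empty) == 0

# Likewise for a set defined by an unsatisfiable property:
assert {n for n in range(100) if n == n + 1} == set()
```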

So let us consider the more general question: are there sets? What exactly are we doing to the numbers 1 and 2 if we separate them with a comma and enclose them in curly brackets? Similarly, what is the difference between the number 1 and the set whose sole element is 1?

Let us look at a simple problem that I recently set to the first-years. I asked them whether there could be a sequence of sets A_1, A_2, A_3, ... such that for every n the intersection of the first n sets A_i is non-empty and yet the intersection of all the A_i is empty. The answer is yes, and one example that shows it is to let A_n = {n, n+1, n+2, ...}.

Why is this a suitable example? Well, the number n belongs to all of the first n sets A_i, but if m is any number then m does not belong to all the A_i, since it does not belong, for example, to A_{m+1}.

We could spell out this justification even more. Why, for example, does n belong to the first n sets A_i? Well, n belongs to A_i if and only if n ≥ i, so n does indeed belong to all of A_1, ..., A_n. Similarly, m does not belong to A_{m+1}.

So an equivalent way to describe the above example is to say that whatever (finite) number of conditions of the form n ≥ i you impose on a number n, it will be possible for them all to be satisfied, but there is no n that is greater than or equal to every i. And the interesting thing about this formulation is that it makes no explicit mention of sets. What's more, it isn't just an artificial translation cooked up with the sole purpose of not talking about sets. Rather, it reflects quite accurately what actually goes on in our minds when we go about proving what we want to prove about the sets.
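The set-free formulation can even be run as it stands. In the following Python sketch - my own illustration, and of course only a finite check - each A_i is represented by nothing more than the condition n ≥ i.

```python
# Represent A_i = {i, i+1, i+2, ...} purely by the condition n >= i,
# with no set objects in sight. (A finite check, of course.)
def in_A(i, n):
    return n >= i

# Any finite number of the conditions can be satisfied at once:
# n itself satisfies the first n of them...
for n in range(1, 200):
    assert all(in_A(i, n) for i in range(1, n + 1))

# ...but no m satisfies them all: m fails the condition for i = m + 1.
for m in range(1, 200):
    assert not in_A(m + 1, m)
```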

So could it be that all that matters about the empty set is something like this? Whenever you see the sentence "x is an element of the empty set" it is false. More generally, could it be that whenever you actually prove something about sets in a normal mathematical context, one of the first things you do is get rid of the sets? I had a good example of this recently when proving in lectures that the equivalence classes of an equivalence relation form a partition. If R is an equivalence relation on a set A and x belongs to A, then the equivalence class of x is the set E(x)={y in A: xRy}, but as the proof proceeded, every time I wrote down a statement such as "z is an element of E(x)" I immediately translated it into the equivalent and much simpler non-set-theoretic statement xRz.


3. Subsets of the natural numbers


I think it would be possible to defend a position that set theory could be dispensed with, at least when it involved sets defined by properties. We could regard the expression A={x:P(x)} not as actually denoting an object named A but as being a convenient piece of shorthand. The statement "z belongs to A", on this view, means nothing more than P(z). Similarly, if B={x:Q(x)} then the statement "A ∩ B = ∅" means nothing more than that there is no x such that P(x) and Q(x).

But not all sets that crop up in mathematics, and I am still talking about "ordinary" mathematics rather than logic and set theory, are defined by properties. Often we talk about sets in a much more general way, using sentences like, "Let A be a set of natural numbers," and proving theorems such as that there are uncountably many such sets. In such contexts it is not as easy to dispense with the language of set theory. And yet the sets we are supposedly discussing, general sets of positive integers, are rather puzzling. My third question is this: what is an arbitrary set of positive integers? Here I have in mind the sort of utterly general set that cannot be defined, the infinite equivalent of a subset of the first thousand integers chosen randomly. We have a strong intuition that such sets exist, but why?

Let us look at the proof that there are uncountably many sets of positive integers, and see what it tells us about our attitude to sets in general. We start with an arbitrary sequence A_1, A_2, A_3, ... of subsets of N, and from those construct a new set A according to the rule

n is an element of A if and only if n is not an element of A_n.

Then A is a subset of N not in the sequence. Since the sequence we looked at was arbitrary, no sequence of subsets of N exhausts all of them.

Why was A not in the sequence? Well, if it had been, then there would have had to be some n such that A=A_n. But for each n we know that n belongs to A if and only if it does not belong to A_n, so A is not the same set as A_n.

This argument shows a very basic property of the two sets A and A_n - that they are not equal. And yet even here I did not really reason about the sets themselves and say some mathematical equivalent of, "Look, they're different." Instead, I used the standard criterion for when two sets are equal:

A=B if and only if every element of A is an element of B and every element of B is an element of A.

This tells me that in order to prove that A and A_n are distinct I must find a positive integer m and show that either

m belongs to A but not to A_n

or

m belongs to A_n but not to A.

So, once again, what I seem to be focusing on is not so much the sets themselves but statements such as "m is an element of A".
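Indeed, the whole diagonal argument goes through with each set replaced by statements of exactly that kind. Here is a Python sketch of my own in which a subset of N is represented purely by its membership predicate.

```python
# The diagonal construction with subsets of N given only as membership
# predicates: statements "m is an element of A" rather than set objects.
def diagonal(A):
    # A(n) is the membership predicate of the n-th set A_n.
    # The new set: n belongs to D if and only if n does not belong to A_n.
    return lambda n: not A(n)(n)

# An example sequence: A_n = the set of multiples of n.
A = lambda n: (lambda m: m % n == 0)
D = diagonal(A)

# D differs from each A_n at the number n itself, so D = A_n for no n.
# (Here D happens to be empty; the point is only the disagreement at n.)
for n in range(1, 100):
    assert D(n) != A(n)(n)
```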

Does this mean that set-theoretic language is dispensable to an ordinary mathematician? I think it often is, but I wouldn't want to go too far - after all, I would certainly feel hampered if I couldn't use it myself. Here is an analogy that I have not had time to think about in any detail, so perhaps it won't stand up, but let me float it anyway. A central project in philosophy is to explain the notion of truth. What is it for a statement to be true? There are some who hold that the word "true" adds very little to our language: if we say that the sentence, "Snow is white" is true, then what we have said is that snow is white, and that is all there is to it. And yet the word "true" does seem to be hard to avoid in some contexts. For example, it isn't easy to think of a way to paraphrase the sentence, "Not all of what George Bush says in the next week will be true" without invoking some notion of a similar nature to that of truth. I think perhaps it is similar for the language of sets - that it makes it much easier to talk in generalities, but can be dispensed with when we make more particular statements.

I must press on, but before I ask my next question, let me tell you, or remind you, of three useful pieces of terminology.


4. Some terminology


The first is the phrase "ontological commitment", a phrase associated with and much used by Quine. One of the standard tricks that we do as mathematicians is to "reduce" one concept to another - showing, for example, that complex numbers can be "constructed" as ordered pairs of real numbers, or that positive integers can be "built out of sets". People sometimes use extravagant language to describe such constructions, sounding as though what they are claiming is that positive integers "really are" special kinds of sets. Such a claim is, of course, ridiculous, and probably almost nobody, when pressed, would say that they actually believed it.

But another position, taken by many philosophers, is more appealing. In describing the world, and in particular the rather problematic abstract world of mathematics, it makes sense to try to keep one's list of dubious beliefs to an absolute minimum. One example of a belief that might be thought dubious, or at least problematic, is that the number 5 actually exists. Questions about what exists belong to the branch of philosophy known as ontology, a word derived from the Greek for "to be"; and if what you say implies that something exists, then you are making an ontological commitment. One view, which I do not share, is that at least some ontological commitment is implicit in mathematical language. But those who subscribe to such a view will often seek to minimize their commitment by reducing concepts to others. Such people may, for example, be comforted by knowing that complex numbers can be thought of as ordered pairs of real numbers, so that we are not making any further worrying ontological commitments when we introduce them than we had already made when talking about the reals.
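To make the flavour of such a reduction concrete, here is a small Python sketch of my own in which complex arithmetic is carried out directly on ordered pairs of reals, with no appeal to what a complex number "is".

```python
# Complex numbers "reduced" to ordered pairs of reals: the construction
# supplies nothing but rules for calculating with the pairs.
def c_add(z, w):
    (a, b), (c, d) = z, w
    return (a + c, b + d)

def c_mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
assert c_mul(i, i) == (-1.0, 0.0)   # i squared is -1, with no further
                                    # ontological commitment needed
```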

The second piece of terminology is the distinction between naive and abstract set theory. A professional set theorist does not spend time worrying about whether sets exist and what they are if they do. Instead, he or she studies models of set theory, which are mathematical structures containing things that we conventionally call sets - rather as a vector space contains things that we conventionally call vectors. And just as, when we do abstract linear algebra, we do not have to say what a vector is, or at least we do not have to say any more than that it is an element of a vector space, so, when we do set theory, we do not need to say what a set is, except to say that it belongs to a model. (In an appropriate metalanguage one could say that a model is a set, and sets, in the sense of the object language, are its elements. But this is confusing and it would be more usual to call the model a proper class.) And to pursue the analogy further, just as there are rules that tell you how to form new vectors out of old ones - addition and scalar multiplication - so there are rules that tell you how to form new sets out of old ones - unions, intersections, power sets, replacement and so on. To get yourself started you need to assume that there are at least some sets, so you want an axiom asserting the existence of the empty set, or perhaps of a set with infinitely many elements.

If you attempt to say what a set is, then you are probably doing naive set theory. What I have just described, where sets are not defined (philosophers would call the word "set" an undefined primitive), is abstract set theory. Notice that as soon as you do abstract set theory, you do not find yourself thinking about the actual existence or nature of sets, though you might, if you were that way inclined, transfer your worries into the meta-world and wonder about the existence of the model. Even so, when you were actually doing set theory, your activity would more naturally fit the formalist picture than the Platonist one.

The distinction between naive and abstract set theory gives one possible answer to the question "What is an arbitrary set of natural numbers?" The answer is, "Don't ask." Instead, learn a few rules that allow you to build new sets out of old ones (including unions, intersections and the diagonal process we have just seen) and make it all feel real by thinking from time to time about sets you can actually define such as the set of all primes - even if in the end the definition is more important than the set.

Another distinction, which I introduce because it may well have occurred to those here who have not previously come across it, is one made by Rudolf Carnap between what he called internal and external questions. Suppose I ask you whether you accept that there are infinitely many primes. I hope that you will say that you do. But if I then say, "Ah, but prime numbers are positive integers and positive integers are numbers and numbers are mathematical objects, so you've admitted that there are infinitely many mathematical objects," you may well feel cheated. If you do, the chances are that you will want to say, with Carnap, that there are two senses of the phrase "there exists". One is the sense in which it is used in ordinary mathematical discourse - if I say that there are infinitely many primes I merely mean that the normal rules for proving mathematical statements license me to use appropriate quantifiers. The other is the more philosophical sense, the idea that those infinitely many primes "actually exist out there". These are the internal and external uses respectively. And it seems, though not all philosophers would agree, that this is a clear distinction, and that the answers you give in the internal sense do not commit you to any particular external and philosophical position. In fact, to many mathematicians, including me, it is not altogether clear what is even meant by the phrase "there exists" in the external sense.


5. Ordered pairs.


I will not spend long over my next question, since I have discussed similar issues already, but it is one of the simplest examples of the slight difficulty I have when lecturing about basic mathematical concepts from the naive point of view. The question is, what is an ordered pair?

This is what I take to be the standard account that a mathematician would give. Let x and y be two mathematical objects. Then from a formal point of view the ordered pair (x,y) is defined to be the set {{x},{x,y}}, and it can be checked easily that

{{x},{x,y}}={{z},{z,w}} if and only if x=z and y=w.

Less formally, the ordered pair (x,y) is a bit like the set {x,y} except that "the order matters" and x is allowed to equal y.

Contrast this account with the way ordered pairs are sneaked in at a school level. There, the phrase "ordered pair" is not even used. Instead, schoolchildren are told that points in the plane can be represented by coordinates, and that the point (x,y) means the point x to the right and y up from the origin. It is then geometrically obvious that (x,y)=(z,w) if and only if x=z and y=w.

Pupils who are thoroughly used to this idea will usually have no difficulty accepting later on that they can form "coordinates" not just out of real numbers but also out of elements of more general sets. And because of their experience with plane geometry, they will take for granted that (x,y)=(z,w) if and only if x=z and y=w, whether or not you bother to spell this out as an axiom for ordered pairs. In other words, it is possible to convey the idea of an ordered pair in a way that is clearly inadequate from the formal point of view but that does not seem to lead to any problems. One can quite easily imagine an eminent physicist successfully using the language of ordered pairs without knowing how to formalize it.

It is clear that what matters in practice about ordered pairs is just the condition for when two of them are equal. So why does anybody bother to "define" the ordered pair (x,y) as {{x},{x,y}}? The standard answer is that if you want to adopt a statement such as

(x,y)=(z,w) if and only if x=z and y=w

as an axiom, then you are obliged to show that your axiom is consistent. And this you do by constructing a model that satisfies the axiom. For ordered pairs, the strange-looking definition (x,y)={{x},{x,y}} is exactly such a model. What this shows is that ordered pairs can be defined in terms of sets and the axiom for ordered pairs can then be deduced from the axioms of set theory. So we are not making new ontological commitments by introducing ordered pairs, or being asked to accept any new and unproved mathematical beliefs.

This account still leaves me wanting to ask the following question. Granted, the theory of ordered pairs can be reduced to set theory, but that is not quite the same as saying that an ordered pair is "really" a funny kind of set. (That view is obviously wrong, since there are many different set-theoretic constructions that do the job equally well.) And if an ordered pair isn't really a set, then what is it? Is there any way of doing justice to our pre-theoretic notion of an ordered pair other than producing this rather artificial translation of it into set theory?

I don't think there is, at least if you want to start your explanation with the words, "An ordered pair is". At least, I have never found a completely satisfactory way of defining them in lectures. To my mind this presents a pretty serious difficulty for Platonism. And yet, as I have said, it doesn't really seem to matter to mathematics. Why not?

I would contend that it doesn't matter because it never matters what a mathematical object is, or whether it exists. What does matter is the set of rules governing how you talk about it - or perhaps I should say, since that sounds as though "it" refers to something, what matters about a piece of mathematical terminology is the set of rules governing its use. In the case of ordered pairs, there is only one rule that matters - the one I have mentioned several times that tells us when two of them are equal (or, to rephrase again, the one that tells us when we are allowed to write down that they are equal, substitute one for another and so on).

I said earlier that I like to think about actual practice when I consider philosophical questions about mathematics. Another useful technique is to think what you would have to program into a computer if you wanted it to handle a mathematical concept correctly. If the concept was an ordered pair, then it would be ridiculous to tell your computer to convert the ordered pair (x,y) into the set {{x},{x,y}} every time it came across it. Far more sensible, for almost all mathematical contexts, would be to tell it the axiom for equality of ordered pairs. And if it used that axiom without a fuss, we would be inclined to judge that it understood the concept of ordered pairs, at least if we had a reasonably non-metaphysical idea of understanding - something like Wittgenstein's, for example.
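Here, for instance, is roughly what that sensible instruction might look like, sketched in Python (the class is my own illustration): the pair is left as an undefined primitive, and only the equality rule is programmed in.

```python
class OrderedPair:
    # The pair is an undefined primitive: we say nothing about what it
    # "is", and program in only the rule for when two pairs are equal.
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        # (x,y) = (z,w) if and only if x = z and y = w
        return (isinstance(other, OrderedPair)
                and self.x == other.x and self.y == other.y)

assert OrderedPair(1, 2) == OrderedPair(1, 2)
assert OrderedPair(1, 2) != OrderedPair(2, 1)   # the order matters
assert OrderedPair(1, 1) == OrderedPair(1, 1)   # x is allowed to equal y
```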

This point applies much more generally. I have sometimes read that computers cannot do the mathematics we can because they are finite machines, whereas we have a mysterious access to the infinite. Here, for example, is a quotation from the famous mathematician Alain Connes:

... this direct access to the infinite which characterizes Euclid's reasoning [in his proof that there are infinitely many primes], or in a more mature form Gödel's, is actually a trait of the living being that contradicts the reductionist's model.

But, just as it is not necessary to tell a computer what an ordered pair is, so we don't have to embed into it some "model of infinity". All we have to do is teach it some syntactic rules for handling, with care, the word infinity - which is also what we have to do when teaching undergraduates. And, just as we often try to get rid of set-theoretic language when talking about sets, so we avoid talking about infinity when justifying statements that are ostensibly about the infinite. For example, what Euclid's proof actually gives is a recipe for extending any finite list of primes. To take a simpler example, if I prove that there are infinitely many even numbers by saying, "2 is even and if n is even then so is n+2", have I somehow exhibited infinite mental powers? I think not: it would be easy to programme a computer to come up with such an argument.
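Indeed, the recipe can be written out in a few lines. The following Python sketch - mine, and only one of many ways of doing it - extends any finite list of primes, exactly as Euclid's proof instructs.

```python
# Euclid's proof read as a recipe: from any finite list of primes,
# produce a prime that is not on the list.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def extend(primes):
    product = 1
    for p in primes:
        product *= p
    # product + 1 is divisible by none of the given primes, so its
    # smallest prime factor is a prime missing from the list.
    return smallest_prime_factor(product + 1)

primes = [2]
for _ in range(6):
    primes.append(extend(primes))
print(primes)   # [2, 3, 7, 43, 13, 53, 5]
```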


6. Truth and provability.


I can date my own conversion from an unthinking childhood Platonism to the moment when I learnt that the continuum hypothesis was independent of the other axioms of set theory. If as apparently concrete a statement as that can neither be proved nor disproved, then what grounds can there be for saying that it is true or that it is false? But if you think there is no fact of the matter either way with the continuum hypothesis, then why stop there? What about the axiom of infinity - that there is an infinite set? It doesn't follow from the other axioms of set theory, and nor, it seems, does its negation. So why should we believe it? Surely not because of some view that the universe is infinite in extent, or infinitely divisible. What would that show anyway? There would still be the problem of applying those funny curly brackets. As I have said, even the axiom that the empty set exists is hard to justify if one interprets it realistically.

So I am driven to the view that there isn't much to mathematical truth over and above our accepted procedures of justification - that is, formal proofs. But something in me still rebels against the intuitionists' idea that a statement could become true when a proof is found, and I'm sure most mathematicians agree with me. So my next question is why, and I would like to look at a few concrete examples.

Here is one that intuitionists like: consider the statement "somewhere in the decimal expansion of pi there is a string of a million sevens". Surely, one feels, there is a fact of the matter as to whether that is true or false, even if it may never be known which.

What is it that makes me want to say that the long string of sevens is definitely either there or not there - other than a general and question-begging belief in the law of the excluded middle? Well, actually I am tempted to go further and say that I believe that the long string of sevens is there, and I have a definite reason for that stronger belief, which is the following. All the evidence is that there is nothing very systematic about the sequence of digits of pi. Indeed, they seem to behave much as they would if you just chose a sequence of random numbers between 0 and 9. This hunch sounds vague, but it can be made precise as follows: there are various tests that statisticians perform on sequences to see whether they are likely to have been generated randomly, and it looks very much as though the sequence of digits of pi would pass these tests. Certainly the first few million do. One obvious test is to see whether any given short sequence of digits, such as 137, occurs with about the right frequency in the long term. In the case of the string 137 one would expect it to crop up about one thousandth of the time in the decimal expansion of pi. If after examining several million digits we found that it had in fact occurred a hundredth of the time, or not at all, then we would be surprised and wonder whether there was an explanation.
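Such a test is easy to carry out for modest stretches of the expansion. Here is a Python sketch of my own - it assumes the mpmath library as a source of digits - that counts occurrences of 137 in the first 100,000 digits and compares the count with the random-model expectation.

```python
# Count occurrences of the string 137 in the first 100,000 digits of pi
# and compare with the frequency a random digit sequence would give.
from mpmath import mp   # assumes the mpmath library for the digits

N = 100000
mp.dps = N + 10                    # working precision, in digits
digits = str(mp.pi)[2:N + 2]       # the digits after the decimal point

count = sum(1 for k in range(len(digits) - 2) if digits[k:k + 3] == "137")
print(count, "occurrences of 137; a random sequence would give about",
      N // 1000)
```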

But experience strongly suggests that short sequences in the decimal expansions of the irrational numbers that crop up in nature, such as pi, e or the square root of 2, do occur with the correct frequencies. And if that is so, then we would expect a million sevens to occur in the decimal expansion of pi about 10^{-1000000} of the time - and it is of course no surprise that we will not actually be able to check that directly. And yet, the argument that it does eventually occur, while not a proof, is pretty convincing.

This raises an interesting philosophical question. A number for which the short sequences of digits occur with the right frequencies is called normal. Artificial examples of normal numbers have been constructed, but there is no naturally occurring number that is known to be normal. Perhaps the normality of pi is not just an unsolved problem but actually an unprovable theorem. If so, then it is highly unlikely that we shall ever find an abstract argument that shows that the expansion of pi contains a million sevens in a row, and direct calculation of the number of digits that would be necessary to verify it "empirically" is out of the question. So what, then, is the status of the reasonable-sounding heuristic argument that pi contains a million sevens in a row, an argument that convinces me and many others?

This question raises difficulties for those who are too ready to identify truth and provability. If you look at actual mathematical practice, and in particular at how mathematical beliefs are formed, you find that mathematicians have opinions long before they have formal proofs. When I say that I think pi almost certainly has a million sevens somewhere in its decimal expansion, I am not saying that I think there is almost certainly a (feasibly short) proof of this assertion - perhaps there is and perhaps there isn't. So it begins to look as though I am committed to some sort of Platonism - that there is a fact of the matter one way or the other and that that is why it makes sense to speculate about which.

There is an obvious way to try to wriggle out of this difficulty, but I'm not sure how satisfactory it is. One could admit that a simple identification of truth with provability does not do justice to mathematical practice, but still argue that what really matters about a mathematical statement is not some metaphysical notion of truth, but rather the conditions that have to hold to make us inclined to assert it. By far the most important such condition is the existence of a formal proof, but it is not the only one. And if I say something like, "pi is probably normal", that is just a shorthand for, "there is a convincing heuristic argument, the conclusion of which is that pi is normal". Of course, a move like this leaves very much open the question of which informal arguments we find convincing and why. I think that is an important philosophical project, but not one I have carried out, or one that I would have time to tell you about now even if I had.

Actually, it is closely related to another interesting question, a mathematical version of the well-known philosophical problem of induction. A large part of mathematical research consists in spotting patterns, making conjectures, guessing general statements after examining a few specific instances, and so on. In other words, mathematicians practise induction in the scientific as well as mathematical sense. Suppose, for example, that f is a complicated function of the positive integers arising from some research problem and that the first ten values it takes are 2, 6, 14, 24, 28, 40, 42, 66, 70, 80. In the absence of any other knowledge about f, it is reasonable to guess that it always takes even values, or that it is an increasing function, but it would be silly to imagine that f(n) is always less than 1000. Why? I think the beginning of an answer is that any guess about f should be backed up by some sort of heuristic argument. In this case, if we have in the back of our minds some picture of a "typical function that occurs in nature" then we might be inclined to say that the likelihood of its first ten values being even or strictly increasing just by chance is small, whereas the likelihood of their all being less than 1000 is quite high.

Let me return to the question of why it seems so obvious that there is a fact of the matter as to whether the decimal expansion of pi contains a string of a million sevens. In the back of many minds is probably an argument like this. Since pi almost certainly is normal, if we look instead for shorter strings, such as 137, then we don't have to look very far before we find them. And in principle we could do the same for much longer strings - even if in practice we certainly can't. So the difference between the two situations is not mathematically interesting and should not have any philosophical significance.

Now let me ask a rather vague question: what is interesting about mathematical theorems that begin "for every natural number n"? There seem to me to be two attitudes one can take. One is that a typical number of the order of magnitude of, say, 10^{10^{100}}, will be too large for us to specify and therefore isn't really anything more to us than a purely abstract n. So the instances of a theorem that starts "for all n" are, after a certain point, no more concrete than the general statement, the evidence for which consists of a certain manipulation of symbols, as a formalist would contend. So, in a sense the "real meaning" of the general theorem is that it tells us, in a succinct way, that the small "observable" instances of the theorem are true, the ones that we might wish to use in applications.

This attitude is not at all the one taken by most pure mathematicians, as is clear from a consideration of two unsolved problems. Goldbach's conjecture, that every even number greater than 2 is the sum of two primes, has been verified up to a very large number, but is still regarded as completely open. Conversely, Vinogradov's three-primes theorem, that every sufficiently large odd integer is the sum of three primes, is thought of as "basically solving" that problem, even though in the current state of knowledge "sufficiently large" means "at least 10^{13000}", which makes checking the remaining cases way beyond what a computer could do. This last example is particularly interesting, since to date only 79 primes greater than 10^{13000} (or even greater than a third of 10^{13000}) are known. So the theorem has almost no observable consequences.
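To convey the flavour of such verifications, here is a small Python sketch of my own that checks Goldbach's conjecture up to a modest bound - finite evidence of exactly the kind that leaves the problem open.

```python
# Check Goldbach's conjecture for every even number up to a modest bound.
LIMIT = 10000
sieve = [True] * (LIMIT + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(LIMIT ** 0.5) + 1):
    if sieve[p]:
        for q in range(p * p, LIMIT + 1, p):
            sieve[q] = False
primes = {p for p in range(LIMIT + 1) if sieve[p]}

for n in range(4, LIMIT + 1, 2):
    assert any(n - p in primes for p in primes if p <= n // 2), n
print("every even number from 4 to", LIMIT, "is a sum of two primes")
```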

In other words, there are two conclusions you can draw from the fact that very large integers are inaccessible to us. One is that what actually matters is small numbers, and the other is that what actually matters is abstract statements.

One small extra comment before I move to a completely new question. Another conjecture that seems almost certainly true is the twin-primes conjecture - that there are infinitely many primes p for which p+2 is also prime. This time the heuristic argument that backs up the statement is based on the idea that the primes appear to be "distributed randomly", and that a sensible-looking probabilistic model for the primes not only suggests that the twin-primes conjecture is true but also agrees with our observations about how often they occur. But I find that my feeling that there must be a fact of the matter one way or the other is less strong than it was for the sevens in the expansion of pi, because no amount of finite checking could, even in principle, settle the question. The difference is that the pi statement began with just an existential quantifier, whereas "there are infinitely many" gives us "for all" and then "there exists". On the other hand, there does seem to be a fact of the matter about whether there are at least n twin primes, for any n you might choose to specify.
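The comparison between the model and the data is easy to make, at least in a small way. Here is a Python sketch of my own (the constant is the usual twin-prime constant from the Hardy-Littlewood heuristic) that counts the twin primes below a million and sets the count beside the crudest form of the prediction, roughly 1.32N/(log N)^2.

```python
# Count twin primes below N and compare with the simplest form of the
# Hardy-Littlewood heuristic, about 1.3203 * N / (log N)^2 pairs.
import math

N = 1000000
sieve = [True] * N
sieve[0] = sieve[1] = False
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        for q in range(p * p, N, p):
            sieve[q] = False

twins = sum(1 for p in range(2, N - 2) if sieve[p] and sieve[p + 2])
predicted = 1.3203 * N / math.log(N) ** 2
# This crudest formula undercounts; integrating 1/(log t)^2 does better.
print(twins, "twin prime pairs below", N, "- crude prediction:",
      round(predicted))
```

But now I am talking about very subjective feelings, so it is time to turn to my last question.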


7. The axiom of choice


I mentioned earlier that the status of the continuum hypothesis convinced me that Platonist views of mathematical ontology and truth could not be correct. Instead of discussing that, let me ask a similar question. Is the axiom of choice true?

By now it probably goes without saying that I don't think that either it or its negation is true in any absolute, transcendent, metaphysical sense, but many philosophers disagree. I recently read an article by Hilary Putnam in which he ridiculed the idea that one could draw any philosophical conclusions from the independence of the continuum hypothesis. But, as nearly always happens in philosophy, I emerged with my beliefs intact, and will now try to do what he thinks I can't with the axiom of choice.

Consider the following statement, which is an infinitary analogue of a famous theorem of Ramsey.

Let A be a collection of infinite subsets of the natural numbers. Then there is an infinite set Z of natural numbers such that either all its infinite subsets belong to A or none of them do.

As it stands, that statement is false (or so one usually says) because it is quite easy to use the axiom of choice to build a counterexample. But there are many mathematical contexts in which the result could be applied if only it were true, and actually for those contexts - that is, for the specific instances of A that crop up "in nature", as mathematicians like to say - the result is true. In fact, there is a precise theorem along these lines, which comes close to saying that the only counterexamples are ridiculous ones cooked up using the axiom of choice. So in a way, the statement is "basically true", or at least true whenever you care about it. In this context the axiom of choice is a minor irritant that forces you to qualify your statement by putting some not very restrictive conditions on A.

There are many results like this. For example, not every function is measurable but all the ones that you might actually want to integrate are, and so on.

Now let's consider another statement, the infinite-dimensional analogue of a simple result of finite-dimensional linear algebra.

Let V be an infinite-dimensional vector space over R and let v be a non-zero vector in V. Then there is a linear map f from V to R such that f(v) is non-zero.

To prove this in a finite-dimensional context you take the vector v, call it v_1, and extend it to a basis v_1, ..., v_n. Then you let f(v_1)=1 and f(v_i)=0 for all other i.
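In the finite-dimensional case the proof is a genuine algorithm. Here is a numpy sketch of my own in which v is extended to a basis by greedily adjoining standard basis vectors, after which f can be read off from the inverse of the basis matrix.

```python
# The finite-dimensional proof as an algorithm: extend v to a basis by
# greedily adjoining standard basis vectors, then read off f.
import numpy as np

def separating_functional(v):
    n = len(v)
    basis = [v]
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        if np.linalg.matrix_rank(np.array(basis + [e])) == len(basis) + 1:
            basis.append(e)
    B = np.array(basis).T       # columns form a basis, the first one is v
    f = np.linalg.inv(B)[0]     # f(v) = 1 and f = 0 on the other vectors
    return f

v = np.array([2.0, -1.0, 3.0])
f = separating_functional(v)
print(np.dot(f, v))             # 1.0 up to rounding - in particular non-zero
```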

For an infinite-dimensional space, the proof is exactly analogous, but now when you extend v_1 to a basis, you have to continue transfinitely, and since each time you are choosing a v_i you are not saying how you did it, you have to appeal to the axiom of choice. And yet it seems unreasonable to say that you can't make the choices just because you can't specify them - after all, you can't specify the choices you make in the finite-dimensional context either, and for the same reason, that nothing has been told to you about the vector space V.

If you are given an explicit example of V then the picture changes, but there is still a close analogy between the two situations. Sometimes there is a fairly obvious choice of the function f, but sometimes there is no canonical way to extend v to a basis. Then, what rescues you if V is finite-dimensional may be merely that you have to make 10 trillion ugly non-canonical choices rather than infinitely many.

But by and large, for the vector spaces that matter, it is clear that a function f can be found, so this time it would be the negation of the axiom of choice that is a minor irritant, telling you that you have to apologize for your theorem by confessing that it depends on the axiom.

So for good reasons - and this is what I would like to stress - we sometimes dismiss the consequences of the axiom of choice and we sometimes insist on them. And in both cases our governing principle is nothing to do with anything like truth, but more a matter of convenience. It is as though when we talk about the world of the infinite, we think of it as a sort of idealization of the finite world we actually inhabit. If the axiom of choice helps to make the infinite world better reflect the finite one then we are happy to use it. If it doesn't then we describe its consequences as "bizarre" and not really part of "mainstream mathematics". And that is why I do not believe that it is "really" true or "really" false.


8. Concluding remarks.


It may seem as though I have ignored the title of my talk, so here is what I mean when I say that mathematics both needs and does not need a philosophy.

Suppose a paper were published tomorrow that gave a new and very compelling argument for some position in the philosophy of mathematics, and that, most unusually, the argument caused many philosophers to abandon their old beliefs and embrace a whole new -ism. What would be the effect on mathematics? I contend that there would be almost none, that the development would go virtually unnoticed. And basically, the reason is that the questions considered fundamental by philosophers are the strange, external ones that seem to make no difference to the real, internal business of doing mathematics. I can't resist quoting Wittgenstein here:

A wheel that can be turned though nothing else moves with it, is not part of the mechanism.

Now this is not a wholly fair comment about philosophers of mathematics, since much of what they do is of a technical nature - attempting to reduce one sort of discourse to another, investigating complicated logical systems and so on. This may not be of much relevance to mathematicians, but neither are some branches of mathematics relevant to other ones. That does not make them unrespectable.

But the point remains that if A is a mathematician who believes that mathematical objects exist in a Platonic sense, his outward behaviour will be no different from that of his colleague B who believes that they are fictitious entities, and hers in turn will be just like that of C who believes that the very question of whether they exist is meaningless.

So why should a mathematician bother to think about philosophy? Here I would like to advance a rather cheeky thesis: that modern mathematicians are formalists, even if they profess otherwise, and that it is good that they are.

This is the sort of evidence I have in mind. When mathematicians discuss unsolved problems, what they are doing is not so much trying to uncover the truth as trying to find proofs. Suppose somebody suggests an approach to an unsolved problem that involves proving an intermediate lemma. It is common to hear assessments such as, "Well, your lemma certainly looks true, but it is very similar to the following unsolved problem that is known to be hard," or, "What makes you think that the lemma isn't more or less equivalent to the whole problem?" The probable truth and apparent relevance of the lemma are basic minimal requirements, but what matters more is whether it forms part of a realistic-looking research strategy, and what that means is that one should be able to imagine, however dimly, an argument that involves it.

I think that most successful mathematicians are very much aware of this principle, even if they don't bother to articulate it. But I also think that it is a good idea to articulate it - if you are doing research, you might as well have as clear and explicit an idea as possible of what you are doing rather than groping about and waiting for that magic inspiration to strike. And it is a principle that sits more naturally with formalism than with Platonism.

I also believe that the formalist way of looking at mathematics has beneficial pedagogical consequences. If you are too much of a Platonist or logicist, you may well be tempted by the idea that an ordered pair is really a funny kind of set - the idea I criticized earlier. And if you teach that to undergraduates, you will confuse them unnecessarily. The same goes for many artificial definitions. What matters about them is the basic properties enjoyed by the objects being defined, and learning to use these fluently and easily means learning appropriate replacement rules rather than grasping the essence of the concept. If you take this attitude to the kind of basic undergraduate mathematics I am teaching this term, you find that many proofs write themselves - an assertion I could back up with several examples.

So philosophical, or at least quasi-philosophical, considerations do have an effect on the practice of mathematics, and therein lies their importance. I have mentioned some other questions I find interesting, such as the problem of non-mathematical induction in mathematics, and I would justify those the same way. And that is the sense in which mathematics needs philosophy.


Brief additions in response to the discussion after the talk


1. Thomas Forster informed me that Russell and Whitehead took roughly the view of ordered pairs that I advocate - treat them as an undefined primitive with a simple rule for equality. So perhaps I did logicists an injustice (unless they felt that the construction of ordered pairs out of sets was significant progress).

2. It was pointed out by Peter Smith that I had blurred the distinction between a pure formalism - mere pushing around of symbols - and "if-then-ism", the view that what mathematicians do is explore the consequences of axioms to obtain conditional statements (if this set of axioms is true, then this follows, while this other set implies such and such else), but nevertheless statements with a definite content over and above the formalism. I don't know exactly where I stand, but probably a bit further towards the purely formal end than most mathematicians. See my page on how to solve basic analysis exercises without thinking for some idea of why.