# A dialogue concerning the need for the real number system.

As any mathematics undergraduate knows, in the hierarchy of number systems that goes N, Z, Q, R, C, (that is, positive integers, integers, rationals, reals, complex numbers) the biggest jump in sophistication is that between Q and R. In order to understand the real numbers properly, you are forced to think about limits, whereas the other jumps (including the one from R to C) just involve finite algebraic manipulations. Why do we bother to make this effort? How "real" are the real numbers anyway? Most of the dialogue to follow is between three imagined characters: M, a mathematician who takes the real numbers for granted, S, a sceptic who is not prepared to learn about anything without being absolutely convinced that it is necessary, and U, an undergraduate who has recently learnt basic analysis. Towards the end, a logician, L, tries to sort out some of the mess. (My apologies if, in my ignorance, I put words into the logician's mouth that no self-respecting logician would utter.)

## Part I. The need for specific real numbers.

S. I was looking at a maths book today and saw a whole lot of complicated stuff about ordered fields, Dedekind cuts, Cauchy sequences - whatever they all might be. Surely I don't need to learn all that just to understand simple numbers?

M. Well, the real numbers aren't as simple as they seem.

S. That was clear from the book I saw, but why bother with them at all?

M. What would be the alternative? The Greeks thought they could get away with rational numbers and then discovered that the square root of two was not rational. In other words, if you put all the rational points on a line, you will find that you have left gaps.

S. Of course I won't have.

M. Yes you will - for example, you won't have included root two.

S. That's hardly a gap. It's just a single number missed out.

M. OK, by "gap" I didn't mean a whole interval without any rational numbers. I just meant that you would have missed out some numbers like root two, and many others as well.

S. And what makes you so sure that those numbers actually exist? You talk as though the Greeks discovered an objectively existing object, the square root of two, which was not rational. But in what sense did they discover it? Did they build an exact square and measure its diagonal? Of course not - infinite precision is impossible. It makes no sense of an actual physical object to say that its length is irrational. Maybe Plato believed in a spiritual realm full of perfect geometrical objects, but I don't.

M. Neither do I, and I agree with much of what you say. Indeed in practice it is not just infinite precision that is impossible: we can't measure anything to more than about 15 significant figures. And when computers model physical situations, they content themselves with rational numbers, obtained by truncations. But I think you are misunderstanding the relationship between mathematics and physics. We don't introduce the real numbers because some physical objects actually have irrational lengths. Rather, we do so because they are a uniquely good model for physical length. With the reals everything works neatly - and one can contrast this with a computer simulation, where complicated approximations are made the whole time.

S. So what you are saying is that even if the square root of two doesn't have a direct physical existence, it is nevertheless a very convenient mathematical construct that allows us to talk about lengths of diagonals in an economical way?

M. Yes.

S. Well, I agree with that, but I don't think it's a justification for the entire real number system. Can't you just take the rationals plus a few other important numbers like the square roots of two, three, five and so on?

M. It depends what you mean by "and so on".

S. All right, I'll take all possible roots of all rational numbers.

M. That won't do, because for example the square root of two plus the square root of three isn't itself a root of anything.

S. Fine, put that in too, and all sums and products of roots of numbers.

M. What about roots of the things you end up with? For example, something like (2^(1/2) + 3^(1/2))^(1/2).

S. Yes, put them in too. In fact, put in any number you can make from rational numbers using addition, multiplication and the taking of roots.

M. And you think you could live with that as your number system?

S. I don't see why not.

M. Well, for a start you can't even solve all polynomial equations in that system. It's not at all obvious that you can't, but in fact there are polynomials of degree five with integer coefficients that do not have solutions that can be built out of rational numbers using just addition, multiplication and taking roots.

S. Isn't that something to do with Galois?

M. Yes. The insolubility of the quintic was actually proved by Abel, but Galois greatly extended Abel's discovery so that one could analyse individual polynomials in a completely systematic way.

U. Hello. Am I interrupting?

M. Not at all. I'm trying to persuade S here of the usefulness of the real numbers. I think he's about to concede that one needs at least the algebraic ones.

S. What are those?

M. They are all solutions of polynomial equations where the polynomials have integer coefficients.

S. Yes, I'm happy with those, though a question does occur to me. If there are quintics you can't solve in an obvious way, why are you so sure that they have solutions?

M. Maybe U can answer that one for us.

U. Indeed I can, as we did that recently in lectures. Consider the quintic

P(x) = ax^5 + bx^4 + cx^3 + dx^2 + ex + f.

Without loss of generality a is positive, and then it's not hard to show that when x is large and positive, so is P(x), and when x is large and negative, so is P(x). So, by the intermediate value theorem, somewhere in between there must be some x where P(x)=0.
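U.'s sign-change argument can be watched in action by running the bisection proof of the intermediate value theorem on a concrete quintic. The polynomial x^5 - x - 1 is a standard example whose real root cannot be written using rationals, +, * and root extraction; the choice of example and the Python sketch below are additions, not part of the dialogue.

```python
from fractions import Fraction

def P(x):
    # A concrete quintic: x^5 - x - 1, whose real root is (by Galois
    # theory) not expressible in radicals.  P(1) = -1 < 0 and P(2) = 29 > 0.
    return x**5 - x - 1

def bisect_root(f, lo, hi, steps=50):
    # The bisection argument behind the intermediate value theorem:
    # repeatedly halve [lo, hi] while keeping a sign change inside.
    lo, hi = Fraction(lo), Fraction(hi)
    assert f(lo) < 0 < f(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = bisect_root(P, 1, 2)
print(float(lo))  # ≈ 1.1673, trapped in an interval of width 2^-50
```

At every step the sign change guarantees a root strictly inside the interval, which is exactly the content of the theorem; fifty halvings pin it down to better than 15 significant figures.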

S. You're leaving me behind. What's the intermediate value theorem?

M. I won't give you a precise statement for now, but I think I can convince you in this instance. Just think of the graph of the polynomial P we're talking about. It is continuous, in the sense that it doesn't make sudden jumps. If for some values it is below the x-axis and others it is above, then in between it has to cross somewhere, because it doesn't jump, and where it crosses you have a solution of P(x)=0.

S. That sounds pretty convincing. So let's include all algebraic numbers then. In fact, I'd have been happy to grant you that long ago. It's all this strange theoretical treatment of real numbers that I don't go for.

M. I could, I suppose, point out that e and pi are not algebraic, but let me first try to explain why a more abstract, theoretical approach to numbers is needed. Results like the intermediate value theorem aren't true for the rational numbers - for example, the graph of x^2 - 2 doesn't cross the x-axis at a rational point - and when you start trying to justify your intuition that it is true, you find yourself inventing the real numbers.

## Part II. The need for a general theory of real numbers.

S. That sounds interesting, if a little hard to imagine.

U. I'm interested in that as well. Are you saying that defining the real number system is in some sense forced?

M. Yes. I mean, even a basic concept like continuity doesn't make proper sense without the real numbers.

U. Why not?

M. Well, take the function f defined on the rational numbers by f(x) = 1 if x^2 < 2 and f(x) = 0 otherwise. This function clearly has jumps at plus or minus root two (not that I am saying that those are rational numbers) and yet, according to the definition of continuity, f is a continuous function.

U. Hmm, let's see. If x is a rational number then x doesn't equal root two, so x is contained in a small interval where f is constant. So f is continuous at x. True for all rational x, so f is continuous.

S. What on earth are you two talking about? The function you've defined has two whopping great jumps and you're saying it's continuous?

M. It looks strange at first, but the point is that continuity is defined as a local property. Intuitively, a function f is continuous at x if f(y) is close to f(x) whenever y is close to x. Then we say that it is continuous if it is continuous at every x.

S. Well, I'd need to think about that one, but all I can say is that if your definition of continuity makes the function you defined earlier a continuous one, then it's not a very good definition.

M. That's what you might think, but actually it turns out to be an extremely good definition. It doesn't quite coincide with our intuitive notion of what a continuous function should be - something like a function whose graph you can draw without lifting your pencil off the paper - but it does a pretty good job in that direction, and over a hundred years have shown that it is in fact the "correct" formalization of that idea.

U. Hang on, I think S. has a point here. The definition of continuity is set up for the real numbers, so it hardly counts as an objection to the rational numbers that it gives strange results there. How do you know there isn't a formal definition that works better for the rationals?

M. Such as what?

U. Well, the trouble we had with the jump at root two was that we weren't allowed to say what we all knew to be the case - that the function has a jump at root two. We can look at a graph and this jump stares us in the face. Couldn't one identify this jump by saying that on one side the function is big and on the other side it is suddenly small?

M. And what do you mean by suddenly?

U. Something like this: for any d > 0 there is a pair x, y such that |y - x| < d, and yet f(x) = 1 and f(y) = 0. So x and y are close, while f(x) and f(y) are not close.
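U.'s criterion really is satisfied by the function with the invisible jump. A sketch in Python (the decimal-truncation trick is an illustration added here): for each d = 10^-k there is a rational pair x, y with |y - x| = d, f(x) = 1 and f(y) = 0.

```python
from fractions import Fraction
from math import isqrt

def f(x):
    # The function from the dialogue: 1 where x^2 < 2, and 0 otherwise.
    # On the rationals x^2 == 2 never happens, so f is defined everywhere.
    return 1 if x * x < 2 else 0

# Rational pairs squeezing the (irrational) jump: x_k is the k-digit
# decimal truncation of root two and y_k adds one unit in the last place,
# so x_k < root two < y_k and y_k - x_k = 10^-k.
for k in range(1, 6):
    scale = 10**k
    n = isqrt(2 * scale**2)          # floor(sqrt(2) * 10^k)
    x, y = Fraction(n, scale), Fraction(n + 1, scale)
    print(k, f(x), f(y), float(y - x))
```

The pairs get as close together as you like, yet f always changes by 1 across them, even though no rational point of discontinuity exists.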

S. I wanted to say something like that. You can still have a jump even if there isn't actually a rational point where the jump occurs.

M. So what exactly is your definition?

U. There seem to be many possibilities. If we try to avoid focusing on a single point (because the point we want to look at might be irrational) then we could just say that f is continuous if for every e > 0 there exists d > 0 such that |f(x) - f(y)| < e whenever |x - y| < d.

M. What you have given is indeed a rigorous definition, but it's of uniform continuity, which is subtly but importantly stronger than continuity itself.

U. So you're saying that there are continuous functions that that definition wouldn't count as continuous?

M. Yes, there are lots. For example, x^2 defined on all the rationals.
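M.'s counterexample is easy to check numerically (the sketch below is an illustration, not from the dialogue): fix the gap d in advance, then sliding the pair x and y = x + d/2 out along the line makes |f(y) - f(x)| as large as you please, so no single d can serve every e in U.'s definition.

```python
from fractions import Fraction

# x^2 on the rationals is continuous but not uniformly continuous: with
# y = x + d/2 the difference f(y) - f(x) = d*x + d^2/4 grows without
# bound as x moves out along the line, even though |y - x| < d always.
d = Fraction(1, 100)
for x in (Fraction(1), Fraction(100), Fraction(10000)):
    y = x + d / 2
    print(float(x), float(y * y - x * x))
```

The identity y^2 - x^2 = (y - x)(y + x) is doing all the work: the increment in f is proportional not just to the gap but to where the gap sits.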

U. Oh yes. All right, here's another definition using sequences. One would like to say that f is continuous if f(x_n) tends to f(x) whenever x_n tends to x, but that runs into the usual problem that the x in question might not be rational. But can't we just talk about sequences that ought to be "convergent" without talking about their limits? Our lecturer told us a good way of doing that the other day - talk about Cauchy sequences. This seems to give a good definition of a sequence of rationals that is "convergent but not necessarily to a rational". So how about saying that f is continuous if it maps Cauchy sequences to Cauchy sequences?

M. Doesn't quite work. For example, let f(x) be defined to be 1/x on the set of all strictly positive rationals. That's continuous but the Cauchy sequence (1/n) maps to the non-Cauchy sequence (n).

U. Yes, but doesn't it work at least for functions defined on all the rationals? Suppose f is the restriction to Q of a continuous function defined on R. Then it maps convergent sequences to convergent sequences. But convergent is equivalent to Cauchy so it maps Cauchy sequences to Cauchy sequences. Conversely, if f maps Q to Q (or even Q to R) and takes Cauchy sequences to Cauchy sequences, then we can extend the definition to R as follows. Given a real number x, take a sequence x_n of rationals converging to x and then map x to the limit of f(x_n). This limit exists because f(x_n) is a Cauchy sequence - which in turn follows from the fact that (x_n) is Cauchy. Also that value of f(x) is well-defined, since otherwise we could interleave two sequences with different limits and contradict the property we're assuming of f.
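U.'s extension recipe can be sketched for a simple case (the choice f(x) = x^2 and the use of decimal truncations as the Cauchy sequence are illustrative assumptions added here):

```python
from fractions import Fraction
from math import isqrt

def f(x):
    # A map from Q to Q that sends Cauchy sequences to Cauchy sequences;
    # f(x) = x^2 is chosen purely for illustration.
    return x * x

def trunc_sqrt2(k):
    # k-digit decimal truncation of root two: floor(sqrt(2)*10^k) / 10^k,
    # a rational Cauchy sequence "converging to root two" as k grows.
    return Fraction(isqrt(2 * 10**(2 * k)), 10**k)

def extend(f, approx, n):
    # U.'s recipe: the extended value of f at a real x is the limit of
    # f(x_n) along any rational sequence x_n converging to x; here we
    # simply evaluate far along one such sequence.
    return float(f(approx(n)))

# f at better and better rational stand-ins for root two: the values
# settle down towards 2, the extended value of x^2 at root two.
print([extend(f, trunc_sqrt2, k) for k in (1, 3, 6, 12)])
```

Nothing irrational is ever touched: only rationals go in, and the "value at root two" appears as the limit of the outputs, which is exactly M.'s complaint that the reals are being smuggled in through the back door.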

M. What you say is right, and works for functions defined on any closed interval of rational numbers. But I don't like it much because you're basically sneaking the reals in through the back door. After all, what is a real number if it isn't a Cauchy sequence of rationals? Going back to the function with a jump at root two, you are suggesting that we identify the so-called "discontinuity" by constructing a sequence of rationals that converges to root two and perhaps alternates between being bigger than root two and smaller, so that f of that sequence alternates between 0 and 1. Then you pretend that root two doesn't exist and merely comment that your sequence is Cauchy. But that just looks like an artificial way of not saying what is really going on.

S. Well I think that when you said f was continuous on the grounds that you weren't allowed to talk about the place where it jumped, you were artificially not saying what was going on.

U. Here's another definition. Given a nested sequence I_n of closed intervals of rational numbers with lengths tending to zero, look at their images f(I_n). For f to be continuous one would want the intersection of the f(I_n) to be either empty or a singleton.

M. I don't like that for the same reason. You're just identifying a real number - the intersection of the I_n - without admitting it.

S. If I may say so, there's something very strange about your position. You take the highly artificial and non-canonical step of defining a real number as an equivalence class of Cauchy sequences of rationals, or of suitable sequences of nested intervals, and then accuse us of being artificial when we don't get excited by your definition. Which is more artificial? To give a simple definition of continuity in terms of Cauchy sequences or to get so carried away that you start to believe that Cauchy sequences are real numbers?

U. Anyway, what about going back to the first thing I said? I've just remembered that all continuous functions are uniformly continuous when you restrict them to a bounded closed interval. So why not define a function f from Q to Q to be continuous if its restriction to any bounded closed interval is uniformly continuous? I.e., what I said before is fine as long as you look at finite pieces of Q. And now I don't see where I am implicitly talking about real numbers.

M. Hmm. I think you may be right. OK, how would you define differentiability?

U. Well, let's look at a function that is, technically speaking, differentiable, but which "oughtn't" to be. How about f(x) = max{x^2 - 2, 0}? I want to say that this function f isn't really differentiable because it isn't approximately linear round root two. To demonstrate this, I could easily construct a pair of rational sequences (a_n) and (b_n) tending to root two from below and above respectively in such a way that (f(b_n) - f(a_n))/(b_n - a_n) doesn't converge. But then you'll probably tell me that I'm implicitly talking about the reals when I introduce these sequences.
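A slight variant of U.'s pairs makes the non-convergence concrete (the particular sequences below are an illustration added here): difference quotients taken between rationals just below root two are all 0, while quotients taken just above tend to 2*sqrt(2) = 2.828..., so interleaving the two families gives a sequence of quotients with no limit.

```python
from fractions import Fraction
from math import isqrt

def f(x):
    # U.'s function max{x^2 - 2, 0}, evaluated exactly on rationals.
    return max(x * x - 2, Fraction(0))

def trunc_sqrt2(k):
    # k-digit decimal truncation of root two: a rational just below it.
    return Fraction(isqrt(2 * 10**(2 * k)), 10**k)

def quotient(a, b):
    return float((f(b) - f(a)) / (b - a))

# Pairs just below root two give quotient 0 (f vanishes there), while
# pairs just above give quotients near 2*sqrt(2); both families close in
# on root two, so the difference quotients there fail to converge.
for k in range(3, 7):
    below = quotient(trunc_sqrt2(k), trunc_sqrt2(k + 1))
    above = quotient(trunc_sqrt2(k) + Fraction(1, 10**k),
                     trunc_sqrt2(k + 1) + Fraction(1, 10**(k + 1)))
    print(k, below, above)
```

The function bends at the missing point, and the bend shows up in rational arithmetic alone.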

M. Yes indeed. And if you try to copy the uniform-continuity idea you will run into the difficulty that not every differentiable function is uniformly differentiable, even on a closed bounded interval.

U. What does "uniformly differentiable" mean?

M. Differentiable at x says that for every e > 0 there is a d > 0 such that for all y with |y - x| < d we have f(y) = f(x) + (y - x)f'(x) + c, where |c| is at most e|y - x|. So to make it uniform, we can ask for d not to depend on x. That is, for all e > 0 there exists a d > 0 such that for all x, y we have the inequality |f(y) - f(x) - (y - x)f'(x)| < e|y - x| whenever |y - x| < d. But this isn't a great definition because it's equivalent to saying that (f(x+h) - f(x))/h tends to f'(x) uniformly as h tends to zero. And if f is differentiable then the functions (f(x+h) - f(x))/h are continuous, so f', as a uniform limit of continuous functions, is itself continuous, and we see that uniformly differentiable functions are automatically continuously differentiable.

U. I don't like the definition because it requires you to know in advance what the derivative f'(x) is.

M. In that case, you could say something like this. "Differentiable" means "locally approximately linear". What does that mean? Well, if f(x) = u and f(y) = v and z = ax + by for some a, b > 0 with a + b = 1 then f(z) ought to be close to au + bv, the value it would take if f were exactly linear. So define f to be uniformly differentiable if for every e > 0 there exists a d > 0 such that whenever |y - x| < d and z = ax + by is a convex combination of x and y we have |f(z) - af(x) - bf(y)| < e|z - x|.

U. That's much nicer. So what's your example of a function that's differentiable but not uniformly so?

M. Any old sin(1/x)-ish thing will do. Let's take the old favourite f(x) = x^2 sin(1/x) away from 0 and f(0) = 0. That wiggles so much near the origin that it easily fails both the above definitions.

U. So are you suggesting that we need the theory of real numbers in order to be able to talk about the differentiability of silly functions like x2sin(1/x)? Why not just talk about uniform or continuous differentiability instead?

M. My feeling is that you would have to give up quite a lot if all your differentiable functions were required to be continuously differentiable. But rather than explore that question I'd prefer to go back to the intermediate value theorem, which in my view gives a much more compelling justification for the real numbers.

## Part III. The Intermediate Value Theorem.

M. Let's take stock a little. Are we all agreed that the rationals on their own are a bit restrictive because there are plenty of useful numbers such as root two, e and pi that are irrational?

U. and S. Yes.

M. All right. Now if we examine why it is that we believe in these numbers we'll find that there are certain methods we like to use to make new numbers, and if we then make the innocent-seeming assertion that these methods are actually valid, we'll find that we are committed to the entire real number system.

S. As I said before, that sounds interesting - but I'll believe it when I see it.

M. Well, it's very simple. Let's take the square root of two for example. What is it? It's the (positive) number that squares to two. And why are we confident that there is such a number? Because of a very useful theorem known as the intermediate value theorem.

S. You keep mentioning that. Are you now going to tell me what it says?

M. Perhaps U. would like to do that for us.

U. All right. It says that if f is a continuous function, a< b are real numbers and f(a)=u, f(b)=v, then for any w between u and v there must be some c between a and b such that f(c)=w. In other words, if a continuous function f takes two values then it must take all the "intermediate" values between them as well. More loosely, if you draw the graph of a continuous function and part of it is below some horizontal line and another part above it, then somewhere the graph actually crosses the line.

M. Thank you. And a good example of that is the function f(x) = x^2. Since f(1) = 1 and f(2) = 4 there must be some x between 1 and 2 such that f(x) = 2.

S. That sounds reasonable, but why is the intermediate value theorem true?

M. I'm interested that you ask. Most people think it looks too obvious to need a proof.

S. I agree that you can't get from one side of a line to the other without crossing it, but it seems to me to be a big step from that physical statement to one about the abstract realm of numbers. And what worries me is that your argument is going to turn out to be circular. If the intermediate value theorem is true only for the real numbers and not for smaller number systems, then how can you use it to justify the real numbers? You need the real numbers to justify it.

M. I happily plead guilty to that. You are confusing two sorts of justification. In order to prove the intermediate value theorem you need the real numbers. In the other direction, I am using the intermediate value theorem to justify the real numbers in a different sense: it is an extremely useful theorem which we would have to do without if we did without the real numbers.

S. Maybe you're right. But it's a disappointingly theoretical justification. I'd hoped for something more direct.

M. That's because I haven't yet made another important point, which is that the intermediate value theorem is just a dressing up in fancy language of an idea that is very direct and natural. Here's another approach to the existence of the square root of two. You just build up the decimal expansion 1.4142135..., taking in each place the largest digit between 0 and 9 that doesn't make the square bigger than 2. This defines an infinite decimal and that number is defined to be the square root of two. Interestingly, this simple construction is very similar to running through one of the proofs of the intermediate value theorem, and when you try to justify the assertion that the resulting number squares to two, you find that you are relying on the continuity of x^2 and effectively proving the intermediate value theorem for this special case.
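M.'s "naive" construction can be carried out mechanically. A sketch in Python (an illustration added here), using only integer arithmetic so that no real numbers are smuggled in:

```python
def sqrt2_digits(places):
    # At each decimal place take the largest digit that keeps the square
    # below 2.  n is the current truncation scaled by 10^k, and the loop
    # maintains the invariant n^2 <= 2 * 10^(2k) < (n+1)^2.
    n, digits = 1, "1."
    for k in range(1, places + 1):
        n *= 10
        while (n + 1)**2 <= 2 * 100**k:
            n += 1
        digits += str(n % 10)
    return digits

print(sqrt2_digits(8))  # → 1.41421356
```

Each pass through the loop is one step of the greedy procedure M. describes, and the invariant it maintains is precisely the "sign change" that the bisection proof of the intermediate value theorem keeps trapped in its interval.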

S. Suppose I accept that. Does it really commit me to the entire real number system, as you said earlier, or just to a few special extra numbers like root two, e and pi?

M. To all of them: if t is any real number, then finding a continuous function that has a unique zero at t commits you to t.

S. And why can you do that?

M. It's trivial: the function f(x)=x-t will do, for example.

S. I can't believe you just said that. Are you seriously using the function f(x)=x-t to justify the existence of the number t?

M. All right, point taken. I was being a little hasty. Let's go back to the discussion of root two. The "naive" way of building it is to produce its decimal expansion. When you talk about the real numbers more formally, you abstract from this procedure a general principle, known as the monotone-sequences axiom. It says that if (x_n) is an increasing sequence of real numbers and there is some z that is bigger than every x_n, then the sequence converges to a limit x. I won't define this precisely, but you can see it happening with the sequence 1, 1.4, 1.41, 1.414, 1.4142, etc., which converges to the square root of two. This general principle is also very natural, and more basic than the intermediate value theorem. It quite definitely commits you to the real number system, because it can be proved that any ordered field for which this axiom also holds must be isomorphic to the field of real numbers.

S. You're blinding me with science a little bit, but if I understand you correctly, we can focus our attention entirely on the question of whether we should accept the monotone-sequences axiom.

M. Yes, because that axiom implies everything else about the reals.

S. I still feel uneasy. Suppose now I take an infinite decimal and try to "justify" it. Let's take pi as an example. I'm sorry if I sound rude, but it seems as though what you are suggesting is the following absurd justification: consider the sequence 3, 3.1, 3.14, 3.141, 3.1415, 3.14159, 3.141592, ... and, bingo, that justifies pi. Well, I can see that we have there a procedure that can be applied to any infinite decimal, but it hardly counts as a justification for pi. To justify pi I'd want to say that it is fundamental for trigonometry, that there is a wonderful formula e^(i pi) = -1, and so on.

M. It's important to distinguish between justification of individual numbers and justification of a number system . I am happy to admit that the vast majority of real numbers are absolutely useless. But together they provide a number system that is generous enough to contain all the useful numbers. Secondly, and more importantly, they provide a context in which the arguments we naturally use to justify the existence of those useful numbers are valid. So if we ever want to define a new number by one of these arguments, we can. We don't have to try to predict in advance what numbers will be useful.

## Part IV. What is an arbitrary sequence?

U. Can I chip in here? I understand what you are saying, and I'm more or less convinced, but I can see possible grounds for objection. What you are saying is roughly this: that when we define an irrational number we have to do a bit of analysis - for example, exhibit a sequence of rationals that converges to it, or a continuous function that has it as a unique root, or a definite integral that has it as its value. It turns out that the first of these methods is always enough, and can even be done with monotone sequences. And now you suddenly say that we should allow all bounded monotone sequences to have limits. But perhaps there is a more restrictive axiom that would give us a much smaller set of numbers but still give us all the useful ones. I think there is some force behind S.'s objection to the "silly" justification for pi. For a typical, generic, completely unspecial real number, it is not easy to think of a sensible justification. Couldn't one say something like: all sensible bounded monotone sequences converge?

M. You can try, but I predict that you won't find it easy.

U. I'm not sure quite what I'm saying, but I am worried that the real number system isn't really defined properly. The picture you have tried to present to us is that we start with the rationals, discover that they are inadequate, introduce some new numbers, see how we did it, and then say that these methods produce the entire real number system.

M. That's a reasonable summary.

U. To put it more formally, you have an infinitary operation, f(x_1, x_2, x_3, ...) = lim x_n, defined for all bounded monotone sequences, and you are saying that the real numbers are the smallest possible system that contains the rationals and is closed under this operation.

M. Exactly.

U. That's fine, if we are happy with the notion of a bounded monotone sequence.

M. Do you find something confusing about that notion?

U. Yes, in a way. I don't know how to articulate what I'm saying, but let me try. Suppose x is a real number - and let's think of it as a typical one, which means for example that it is undefinable. Then x determines for us a bounded monotone sequence - just take its decimal expansion and look at longer and longer truncations. Conversely, that bounded monotone sequence determines x - since x is its limit. But all that seems to tell me is that x and the bounded monotone sequence are co-dependent - given one, I've got the other. But it gives me no clue about which pairs (x, sequence converging to x) exist.

M. Are you suggesting that there are such pairs that don't exist? If so, then I don't know what the words "there are" mean!

U. As I said before, I don't know how to explain myself very well. It just seems that in order to define the closure operation, you have to talk about an arbitrary bounded monotone sequence, and that notion is virtually equivalent to the notion of a real number. So you end up with an apparent circularity. And the effect of that circularity is that it is not really clear in what sense the vast majority of real numbers exist. When you say "every bounded monotone sequence" I want to ask what bounded monotone sequences there are. Why should I accept the notion of an arbitrary bounded monotone sequence? In fact, I really don't think you have justified the real numbers at all - beyond the ones that can be defined in a nice explicit way.

S. Thank you. That is what I was trying to say earlier.

M. We are getting into murky waters here. There is something mysterious about the idea of an arbitrary infinite sequence - even when it just consists of zeros and ones. And yet, a workable theory of real numbers seems to require us to accept that some of them are undefinable, for the simple reason that there are only countably many definitions and uncountably many real numbers. That observation rules out what might otherwise be an attractive idea - to restrict attention to just the definable real numbers.

U. I'm a bit confused about the idea that there are only countably many definable real numbers. What if you put them in a list according to the alphabetical order of their definitions and then apply a diagonal argument? Doesn't that give a new, and defined, number?

M. I told you we were getting into murky waters, and that question deserves a detailed discussion of its own. But the short answer is that you have to make precise what you mean by a definition. If you do, then you find that the diagonal number you construct is not definable in the way that the numbers in the list were. It's more like meta-definable.
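The mechanics of U.'s diagonalisation can be sketched on the finite front end of a list (the sample list below is an illustration added here, not from the dialogue):

```python
def diagonalise(expansions):
    # Diagonal argument on a list of decimal expansions "0.d1d2...":
    # make the n-th digit differ from the n-th digit of the n-th entry.
    # Using only the digits 4 and 5 also dodges the 0.4999... = 0.5000...
    # ambiguity of decimal notation.
    digits = (row[2:] for row in expansions)
    return "0." + "".join(
        "4" if row[n] == "5" else "5" for n, row in enumerate(digits)
    )

listed = ["0.000000", "0.141592", "0.718281", "0.333333"]
new = diagonalise(listed)
print(new)  # → 0.5555, which differs from the n-th entry in digit n
```

The construction is as definite as one could wish, and that is M.'s point: it is a perfectly good definition, just one level of definability above the list it was applied to.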

U. But in that case, I don't see why I can't just fix on a decent notion of definition and restrict attention to definable real numbers. If you then try to persuade me that there are numbers I have left out, I'll ask you to define one for me, and you won't be able to. You'll be able to meta-define one, but so what? Why should I worry about that?

M. Because your set will be countable and all the theorems of analysis will become false.

U. But will that be a problem? The "theorems of analysis" you refer to concern objects I don't like and don't have an obvious use for, like "arbitrary" sequences. I'd still be able to do all the useful stuff wouldn't I? For example, the intermediate value theorem would be true for definable continuous functions, and I'm not too worried about any others. In fact, I could end up speaking exactly the same language as you, but mean something slightly different by the phrase "for all x", which for me would mean "for all definable x", whatever sort of object x might be. In fact, I could even say that the reals were uncountable! What I'd mean by this in your terms is that there is no definable bijection between N and the definable reals, which there isn't because then I could apply a diagonal argument and define a real not in the image.

M. I'm convinced that this is going to break down somewhere, but I'm not quite sure where yet.

L. I'm sorry to butt in, but I couldn't help overhearing some of your conversation, and I think I may be able to deal with some of your questions.

M. Oh yes?

L. Well, the most obvious thing to say is that U. is on the way to rediscovering so-called constructible set theory. The details are more complicated than U. is making them, but the basic thought is sound: when we talk about the set of all subsets of an infinite set, it is not clear what we mean. Most mathematicians just don't worry about that, and accept a notion of "arbitrary sequence, whatever that might turn out to be". But a perfectly consistent (at least if normal set theory is consistent) set theory results if one interprets the power set operation as giving you all definable subsets of a given set.

U. How is that any different from what I was saying?

L. The difference is that you are too resistant to what you called meta-definition. At least as Gödel set up constructible set theory, you build up sets one level at a time. You have a collection of formulae you are allowed to use, and into them you can put any sets you have defined so far in order to define new ones. Using Gödel numbering, you can even do diagonalizations, but if you apply a diagonal argument to a collection of sets at level n then you get a set at level n+1. And that n doesn't have to be a positive integer - it goes on up into the ordinals. If you carry on building levels up to the first uncountable ordinal, then you finally get to show that there are uncountably many subsets of the natural numbers.

U. If you're allowed such a flexible notion of "constructible" then who is to say that you haven't constructed all the subsets of the natural numbers?

L. Not an easy question to answer. The axiom V=L, which states that every set is constructible, is consistent with ZFC. And that's not particularly surprising, given that you can't construct a counterexample!

S. So a moral you could draw is that when most mathematicians talk about the real numbers, they don't know what they are talking about.

L. That's a rather blunt way of putting it. I'd prefer to say that they leave unspecified what they mean by the idea of an arbitrary subset of a set, because they don't need to worry about it. They have the axioms for a complete ordered field, and if you believe in the external reality of mathematical objects then you will have to say that those axioms are ambiguous, because the completeness axiom relies on the notion of an arbitrary sequence, which is not fully explained. But if all you want from your axioms is a set of rules for when you are allowed to deduce one mathematical statement from another, then the axioms for a complete ordered field give this to you.

S. So the theory of real numbers is just a collection of meaningless formal manipulations?

L. No, not meaningless. Even if you don't say exactly what objects the theorems of analysis, such as the intermediate value theorem, apply to, you do at least know that they apply to simple objects such as the function f(x)=x2. If you want to try to do real numbers in a "minimal" way, proving exactly what you need for "ordinary" mathematics and no more, then you make life unnecessarily complicated because you have to worry about what is definable and so on. It is much more sensible for most mathematicians just to leave the notion of an arbitrary subset a bit vague and reason in a formal way. You may then have proved more than you needed to, but why not if that is actually simpler?

M. So, finally, we arrive at the following justification for real numbers. 1. We must go further than just the rationals. 2. When we do so we introduce certain procedures that give us new numbers. 3. Formalizing these, we end up with the monotone-sequences axiom, or something equivalent to it. 4. This axiom is not as precise as it seems, since the notion of an arbitrary monotone sequence, even of rationals, is not precise. 5. There is no need to make it precise, because we know how to reason in terms of arbitrary sequences. 6. That allows us to define the real numbers we have a use for, even if it gives us a lot of junk as well. 7. In fact, we don't really know what junk it does give us, and it's not even clear that it makes sense to ask.