0.999... = 1 (wikipedia.org)
218 points by yurisagalov on April 28, 2020 | 626 comments


There is no proof that will ever satisfy a person dead-set against this. Ever since I brought this home from school as a child, my whole family ribbed me mercilessly for it.

If you tell a person that 3/6 = 1/2, they'll believe you - because they have been taught from an early age that fractions can have multiple "representations" for the same underlying amount.

People mistakenly believe that decimal numbers don't have multiple representations - which, in a way, is correct for plain finite decimals. The bar or dot or "..." notation is there to plug a gap, allowing more values to be represented exactly than plain-old decimal numbers allow for. It has the side effect of introducing multiple representations - and even with this extension, it doesn't cover everything: Pi, for example, can't be represented exactly at all.

But it also exposes a limitation in humans: We cannot imagine infinity. Some of us can abstract it away in useful ways, but for the rest of the world everything has an end.

I wonder if there's anything I can do with my children to prevent them from being bound by this mental limitation?


It's more fundamental: people seem to have the intuition that the decimal representation of a number is the number. I don't know if it's because decimals resemble the natural numbers or what, but decimals seem to have a primacy for people that fractions do not. The idea that there's a gap between the symbol for a thing and the thing itself is the stumbling block.


I think this is a very insightful remark. People think that numerals _are_ numbers, and it's hard to explain why this is not the case, because we have no way to talk about specific numbers _except_ by using numerals. But many frequently-asked questions are based in a confusion between numbers and numerals. For example, many beginner questions on Math SE about irrational numbers are based in the mistaken belief that an irrational number is one whose decimal representation doesn't repeat. I've met many people who were just boggled by the idea that “10” might mean ●●, or ●●●● ●●●● ●●●● ●●●●, rather than ●●●●● ●●●●●. A particularly interesting example I remember is the guy who asked what were the digits that made up the number ∞. It's a number, so it must have digits, right? (https://math.stackexchange.com/q/709657/25554)
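
A toy illustration of that gap, if it helps (Python here and throughout, purely as a sketch): the same two-character string names a different number depending on the base you read it in.

    # The string "10" is a numeral; which number it names depends on the base:
    for base in (2, 10, 16):
        print(f'"10" read in base {base} is {int("10", base)}')
    # base 2 -> 2, base 10 -> 10, base 16 -> 16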

Computer programmers (and historians!) have a similar problem with dates, and in particular with issues like daylight saving time and time zones. I think a lot of the problem is that again there's no way to talk about a particular instant of time without adopting some necessarily arbitrary and relative nomenclature like “January 17, 1706 at 09:37 local time in Boston”. But when was this _really_? Unfortunately there is no “really”. (“Oh, you mean Ramadan 1117 AH, now I understand.”)
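
A concrete sketch of the date version, using Python's standard zoneinfo module (Python 3.9+): the instant is one thing, the local names for it are many.

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo  # stdlib since Python 3.9

    # One instant; every human-readable rendering of it is relative to
    # some arbitrarily chosen zone.
    instant = datetime(2020, 4, 28, 17, 0, tzinfo=timezone.utc)
    for zone in ("America/New_York", "Europe/London", "Asia/Tokyo"):
        print(zone, instant.astimezone(ZoneInfo(zone)).isoformat())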


Apparently the New Math (https://en.wikipedia.org/wiki/New_Math) tried to address this kind of issue quite explicitly, by drawing a consistent distinction between numbers and numerals (where a numeral is a symbol that names a number). Reportedly most American math students found the distinction extremely hard to grasp when it was presented in elementary school. Maybe it would have worked better when they were a bit older.

I wonder if there's a way of teaching this distinction that would make sense to most students.

I think Feynman said somewhere that the New Math explicitly taught base representation and base conversions, probably as a way of trying to underscore the idea that "123" is a representation of a number rather than a number. Feynman found this to be of questionable value and thought that most students didn't manage to get the point.

Edit: there's a similar issue in linguistics because you have words, phonemes, phones, graphemes, and glyphs. You could say that "dog" isn't a word, but is rather the standard way of writing a particular word in the standard writing system for English (which would sometimes be indicated by <dog> in linguistic contexts). This idea lets you refer to <alright> and <all right> as ways of writing the same word, or <color> and <colour>, or in the case of languages with multiple writing systems <हिन्दुस्तानी> and <ہندوستانی>, or <אַ שפּראַך איז אַ דיאַלעקט מיט אַן אַרמיי און פֿלאָט> and <a shprakh iz a dialekt mit an armey un flot>.


>the mistaken belief that an irrational number is one whose decimal representation doesn't repeat

...which is true for any base-n representation where n is a natural (even rational) number. And that's kind of implied most of the time, so it seems like a useful definition. Where would this lead to problems?


I think the problem is that the definition, while valid, makes you hyperfocus on irrational numbers this way.

Seldom do we prove that a number is irrational by inspecting its decimal expansion. This would be in most cases a very unnatural proof. Since irrationality is a negative property (meaning, one arising out of a negation: the number is not a ratio), most of the time you prove it by contradiction. But people who just know the "digits don't repeat" definition expect us to somehow be able to list all of the digits of an irrational number and show that this infinite list doesn't repeat, which is, of course, an impossible task.


It's _true_, but it's not good as a definition, because it's hard to reason about. It drags in all sorts of contingent facts about base-10 representations that are not usually of interest.

The equivalent property, that a number is irrational if it's not equal to m÷n for any integers m and n, is much simpler. So we use that as the definition, and from that simple and intrinsic definition, we prove the _theorem_ that the decimal representation of an irrational number never repeats.


Yeah, can we have an example of an irrational number whose decimal representation repeats or terminates?

Or a rational number whose decimal representation doesn't repeat?


My phrasing was bad. I should have said "the mistaken belief that an irrational number is *defined to be* one whose decimal representation doesn't repeat".

Usually we define it like this: an irrational number is one that isn't a quotient of two integers. Starting from that definition, we then prove the _theorem_ that the decimal representation of a number repeats if and only if the number is rational.

It's much easier to start from the intrinsic properties, and use those to prove things about the representation, than the other way around. But if you don't distinguish the representation from the thing itself, you can't tell which way you are going.


I am still not in agreement.

The proof that the usual definition is equivalent to the representation is fairly straightforward and easy, no matter which side you picked as the definition. And once the equivalence is established, all other proofs proceed naturally. It therefore matters a lot that we pick one as a definition and know which one we picked, but not so much which one we picked.

Now in fact the quotient definition is by far the more interesting mathematically. There is also a clear foundational reason to prefer it, namely that you can easily construct and prove things about the rational numbers long before you construct the real numbers. However it is unlikely that anyone who is confused about the definition of a rational number has a clear understanding of how the reals are constructed, so that is not a particularly important consideration for them.

Furthermore the fact that foundational considerations argue for one construction over another has little bearing on what is pedagogically preferable. As a famous example, the easiest way to rigorously define logarithms is through the integral of 1/x. However explaining logarithms that way to someone who doesn't know them is a pedagogical disaster.


I expect mjd is thinking of irrational bases. The number might still be written as 10, in digits that look decimal.


Thank you! This thread is full of people insisting on something wrong because they were taught incorrectly; an irrational number is defined in terms of integer ratios for a reason.

It's not like those people haven't worked with an irrational base before, either! Radians have an irrational base. When we talk about 2π radians, or 1/4π radians, that's exactly what we're doing.


That is not what I meant at all. (My phrasing was unclear. Sorry for the confusion.) Jordi understood what I meant though.


It was unclear; I understand rational numbers to be ratios, and irrationals to be inexpressible as fractions, and I see an almost direct connection between that and digital representation in rational bases, so it seemed deeply confusing to see a consequence of the definition of rationals being refuted. The only way out was an irrational base.


Maybe I'm misunderstanding, but I think the issue with dates is strictly different. Dates are hard not because time is fundamentally hard, but because there is lots of complexity in human representation of time (different places at different times have had similar but different representations of time). But that's not inherent to time. Ignoring relativity, if everyone throughout time used something like seconds since Unix epoch (or some similarly arbitrary point in time), then writing programs about time would be much simpler.

I think the numerals and numbers issue is more complex because numerals are fundamentally hard to reason about. Even the question "what is a number?" is deceptively deep.


No, dates are harder than that. Humans use time to coordinate; the representation of time is fundamentally about communication. Only timestamps of events in the physical world are easy (ish). But that's not always, or perhaps mostly, what people are interested in.

When people receive a time, they may (usually) want it in their own time zone, but they might instead want it in the time zone of the entity they're getting the time from, if they're subsequently going to talk to that entity about the time. When they talk about meeting someone else, when they convey the time of the meeting, they usually mean whatever that time means in the place where they meet, which might be different from the current location of either. It might even be different due to political changes around time zones and daylight saving if the meeting is far enough in the future.


You say "no, dates are harder than that", but it sounds like you agree with me exactly based on the rest of your post. Time is hard because human representations of it are varied, complicated, and often arbitrary--not because there's anything fundamentally hard about the math of time (again, relativity notwithstanding). Contrast that with numbers which are inherently difficult to intuit about apart from issues of representation.


Yet the way we represent numbers is also a human construct. 1 = .999... is hard for people to understand because we think and write in base 10 rather than base 3. There's nothing that is fundamentally hard to reason about here.


Oof, I meant to write "numbers are fundamentally hard to reason about", not "numerals". Missed the edit window. The point I was trying to make is that numbers are very often fundamentally counterintuitive irrespective of notation. E.g., is the set of natural numbers larger than the set of decimal numbers? Or the other way around? Or equal? The complexities of time are almost exclusively inferring and converting between (often ambiguous) representations.


Of course in base three, they'll have a hard time with .222...


> For example, many beginner questions on Math SE about irrational numbers are based in the mistaken belief that an irrational number is one whose decimal representation doesn't repeat.

How is this a mistaken belief?

Every rational number winds up in a repeating decimal representation and every number with a repeating decimal representation is a rational number. We learn algorithms to go back and forth between the two in elementary school.

Therefore irrational numbers cannot have repeating decimal representations. Conversely numbers with decimal representations that don't wind up repeating cannot be rational and so must be irrational.
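
The forward direction really is just the long-division algorithm from school; a small sketch (the helper name is mine) that makes the repetition visible:

    def decimal_digits(num, den, n):
        """First n digits of num/den after the point, by long division."""
        digits, r = [], num % den
        for _ in range(n):
            r *= 10
            digits.append(str(r // den))
            r %= den
        return f"{num // den}." + "".join(digits)

    print(decimal_digits(1, 7, 12))  # 0.142857142857 -- the period is 6
    print(decimal_digits(1, 3, 6))   # 0.333333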



Proof: bring up removing time zones, such that there is only one time zone.

The responses are comical, even here on HN, such as not being able to wake up or not knowing when morning is or when meals are. Let’s not forget that time zones are a human invention less than 200 years old.


> Let’s not forget that time zones are a human invention less than 200 years old.

What's the point of that statement? Before the introduction of formal time zones, we had thousands of informal ones, one for each settlement, calibrating noon to the zenith of the sun.


The point is to indicate that humans knew how to wake up, eat food, communicate, and conduct business (even over long distances) before they had clocks much less the relatively recently invented time zones.


200 years ago we didn't need timezones because we couldn't communicate or travel fast enough or often enough for them to matter. With railroads and telegraphs the level of granularity required for "some time 300 miles away" went from days to hours and minutes. No one in the U.S. west wanted to be told "The sun is at its zenith around 8 a.m. for you." For most people talking about time in their day to day lives it's far more useful to communicate a relative time of day with people near them than to communicate an absolute moment in time and do the math to figure out how bright it is outside.


> 200 years ago we didn't need timezones because...

We still do not. China and India are examples of large geographies spanning across a vast amount of longitude and yet each are a single time zone. Time zones are a political entity only, an unnecessary complexity. The absence of time zones will not halt business or communication over long distances.

> For most people talking about time in their day to day lives it's far more useful to communicate a relative time of day

People have done this for thousands of years without modern chronometers. Examples: dusk, dawn, morning, midday, afternoon, evening, twilight.


These two national policies aren't equally difficult: India extends across about 29° of longitude, while China extends across about 61°. The natural size of a time zone is 15° of longitude, so without political considerations India would include about 2-3 time zones, while China would include about 4-5.

I've heard there's some pushback against the Chinese policy in that some people in the west keep an unofficial local time which is widely understood and quoted (though presumably not for things that are sufficiently official or relevant to other regions). Apparently there's currently an ethnic conflict over the time zone status in Xinjiang:

https://en.wikipedia.org/wiki/Xinjiang_Time

Maybe this conflict has now been pushed underground by force?

> In 2018, according to Human Rights Watch, a Uyghur man was arrested and sent to a detention center because he set his watch to Xinjiang Time.


> A particularly interesting example I remember is the guy who asked what were the digits that made up the number ∞. It's a number, so it must have digits, right?

I don't think there are many areas of math where ∞ is a number. In my experience people have a whole other problem with ∞, thinking that it is some sort of huge concept defined globally in math, where it is just a notation shared by various non-mystical definitions across subjects (e.g. bijection-based definition of infinite set, epsilon-based definition of convergence, etc)


Reminds me of Terry Pratchett's description of the mathematical reasoning of camels in Pyramids:

Lack of fingers was another big spur to the development of camel intellect. Human mathematical development had always been held back by everyone’s instinctive tendency, when faced with something really complex in the way of triform polynomials or parametric differentials, to count fingers. Camels started from the word go by counting numbers.


Possibly people are looking at two different symbols and asking "can you show me logically why those are equal." If they're given a definition of "equal" and they still object, that's a different problem.

I have this problem every time I play with group theory again. You get the axioms for a group, which say there is some identity but don't explicitly require the identity to be unique. You can easily prove that the identity of a group is unique ... so long as you define "unique" to mean "if element e1 and element e2 are equal, then we say they are the same element."

You could count things differently and say the identity is "not unique", it would just lead to a lot of stupid and un-illuminating consequences.
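
For what it's worth, the standard uniqueness proof is one line. If e1 and e2 are both identities, then

    e1 = e1 e2 = e2

(the first equality because e2 is an identity, the second because e1 is), and the only notion of "unique" needed is exactly the one you stated: the two elements are equal.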


I used to have the same annoyance about group and category theory; learning some about the type-theory-as-foundations work (particularly homotopy type theory and cubical type theory) has helped w/ this; in that setting, you have several distinct but well-developed notions of equality: propositional equality, which group theory and most math cares about, vs judgemental equality, which is the one that's "obviously true" by the rules of the system.

e.g. 5 = 5 is true under judgemental and propositional equality, whereas x + 2 = 2 + x is only true under propositional


Yeah forcing myself to keep making sure I'm defining "equality" seems to be a really useful way of "breaking my brain" until I understand what an algebraic structure "really is."


I think a lot of people don't think of math in terms of definitions and proof. Math was just something they were taught as kids. And even if they've gotten into more advanced math, I think the 1 = .9... question hits their kindergarten brain and they just say "no" to it the same way they'd say "no" to someone singing the alphabet song in the wrong order.


I think a lot of people see “find me a number between 0.9999... and 1” as no more or less valid (plus, perhaps, no more or less pedantic and dickish) than “ok smarty pants, why don’t you just keep adding 9s until you reach 1, then we’ll talk.”


I imagine that's true sometimes.

On the other hand, hypothetically, if it were the case that we were missing a key definition that's needed for some proof, and someone didn't believe that proof, then maybe we wouldn't quite know enough yet to decide that the person is in mental kindergarten. A better first step for us might be to supply the missing definition.


This is a good point, but an even more basic issue is that the question "what is a number" is a matter of definition. There isn't a "correct" definition of numbers; only one that we've accepted as standard. The accepted definition of a "real number" is actually quite complicated [1], and it's certainly not easy to convey why this complexity is necessary. Other definitions are also possible [2], but nonstandard.

The simplest definition is: a finite decimal a_k … a_1 . b_1 … b_h is defined to be a fraction and an infinite decimal is defined to be a limit. You'd still have to define what a limit is, but that is somewhat more intuitive.

[1] https://en.wikipedia.org/wiki/Dedekind_cut

[2] https://en.wikipedia.org/wiki/Hyperreal_number


Sorry, no.

There are TWO standard definitions of the real numbers. Namely Dedekind cuts and Cauchy sequences. (They are completely equivalent.) The usual decimal representation of a number turns out to be a Cauchy sequence.

The "simplest definition" that you provide turns out to be rather non-simple in practice. Try proving that multiplication is commutative to see the difficulty.

There are plenty of other number systems out there. Try https://en.wikipedia.org/wiki/Surreal_number or https://en.wikipedia.org/wiki/P-adic_number or the complex numbers.



Yes, but at the same time it is common for people to insist that 0.999… only "approaches" unity, as if it were a series approaching a limit rather than a unique number. Intuition is a funny thing.


Saying "a number isn't a limit" is true, but it's only really relevant if you're talking to someone who genuinely has no idea what limits are. In actual math the number 0.99... can be defined as the limit of a series.


Every number is a limit, yes, but people think of 0.999… as a series. Not rigorously, of course, but that's a common argument even by people (perhaps especially by people) who have a highschool or even math-minor level understanding of series and limits.


Great point. My thought on (1/3 = 0.3333...) * 3 = 1 = 0.999... was that it is intuitively obvious that the "problem" is that we use base-10 for decimals. There is nothing magic or unknowable about the quantity 1/3.

I've often wondered if there is some alternate base or mathematical system entirely that would be "better" in these respects. The thought usually comes up thinking about why pi is such an "ugly" number in base-10 decimal.


I think the problem is that people are often only taught base-10 so they confuse numerals with numbers. If you learn base 2 and then base 16 and then base pi, you start to realize that numbers are something more abstract than whichever numeral system we use to represent them. Rightly or wrongly, the way I imagine integers now is an infinite set of different numerals (base infinity?) such that there is only ever 1 digit (I don't actually have concrete pictorials assigned to those numerals).


Interesting. I find it funny to imagine what if you did have pictorial representations. They'd have more and more complex strokes and knots, and you would also need an infinitely large paper or infinitely precise pen in order to not end up repeating a number sooner or later as you enumerate them.

Come to think of it, this is one way of thinking about the relationship between symbols and geometry.


Who says pictorials have to be 2-dimensional? ;)


Base 12 is a better base overall. It is divisible by more numbers. That is why it is used in various monetary, time and measurement systems.

It's the one issue I have with the metric system... But that ship has sailed :) look up the dozenal society if you're curious how fervent some supporters might be.


The Babylonians thought it was 60. (But imagine having to remember an additional 50 numeric characters.)


> remember additional 50 numeric characters

The numerals are not distinctly varied like our Arabic numerals. Quite the opposite, they are repetitive and completely systematic and require 80% less effort to remember.

https://commons.wikimedia.org/wiki/File:Babylonian_numerals....


Any other argument for a different base aside, in base 12 you have the analogous “problem” that 0.BBB... = 1. None of the usual bases, equipped with this “...” power, avoid the “problem” (non-unique representation).


I think the confusion comes when mathematicians say "a sheet of paper can be 0.333... (0.3_, ie 0.3 recurring) units long" or something because in experienced reality we can always choose a measure that's rational (in the maths sense). That 1/3 of a meter can be measured in one-third-meter units and be precise and easily written.

Now sure, make a square using those measures and measure the diagonal, like an awkward mathematician - "see, see, we need irrationals!" - but then we can just cut another measure that's exactly that length ... stupid mathematicians!

Yeah representation is not reality.

Now, what's the ratio of those two measuring sticks ...


Whatever base you use, it will only be useful for rationals. You can't use any base to significantly improve the representation of pi. In base 7, 3.1 would be a nice approximation, but you can't go beyond approximations.


It doesn't have a terrible representation in base pi ;)


;) As someone fond of integers, I can't advocate base pi.


You can't fix irrationals by changing base. But, there are other approaches.

Continued fractions give very approachable representations for common irrational numbers like e and sqrt(2). While π doesn't have a good continued fraction, it has some very well-behaved generalized continued fractions.

https://en.wikipedia.org/wiki/Continued_fraction
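
A quick float-based sketch of those patterns (the helper is mine, and only the first several coefficients are trustworthy, since floating-point error compounds with each inversion):

    import math

    def cf_coefficients(x, n):
        """First n continued-fraction coefficients of x (float precision only)."""
        out = []
        for _ in range(n):
            a = math.floor(x)
            out.append(a)
            if x == a:
                break
            x = 1 / (x - a)
        return out

    print(cf_coefficients(math.sqrt(2), 8))  # [1, 2, 2, 2, 2, 2, 2, 2]
    print(cf_coefficients(math.e, 8))        # [2, 1, 2, 1, 1, 4, 1, 1]
    print(cf_coefficients(math.pi, 8))       # [3, 7, 15, 1, 292, ...] -- no pattern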


Decimals are easier to compare than fractions. I can't easily work out in my head which one is bigger, 457/790 or 580/924, but I can easily see that (approximately) 0.57848 is smaller than (approximately) 0.62771.
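
(The exact comparison is mechanical too - cross-multiplication - it's just not head-math for most of us. A throwaway check:)

    from fractions import Fraction

    # a/b < c/d iff a*d < c*b, when b and d are positive:
    print(457 * 924 < 580 * 790)                    # True, so 457/790 < 580/924
    print(Fraction(457, 790) < Fraction(580, 924))  # True, the same comparison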

Since the fundamental thing most people want to do with numbers is see which one is bigger, they favour decimal expansions. And since decimals worked so well for fractions, why not use them for everything else?


"Players and painted stage took all my love, And not those things that they were emblems of."

The Circus Animals' Desertion, W. B. Yeats


Similarly, perhaps, many take for granted that the written version of a word, in current standard spelling is the word.


The real issue is that we don't define the real number system before we use it. The fact that 0.999... = 1 is a consequence of a formal definition of decimal numbers. We can create a new definition of decimal numbers that does not satisfy this equation and use it in place of our current one.

Let's imagine a new decimal number system with some vague notion of infinitesimal numbers. We lose some properties we enjoy in our current system, but all of those properties still hold for numbers with no infinitesimal part. We can still use our everyday numbers like nothing has changed, yet we also have a notion to describe infinitesimal values. We can make statements like 0.333... is infinitesimally less than 1/3 and carry on like nothing else has changed.

Now let's sit someone down, start with the rational numbers, introduce Dedekind cuts to define the real numbers and prove that in the real number system that 0.999... is exactly equal to one. Let's also convince them that the real numbers are the unique complete ordered field and that each of these properties are indispensable. Then they will believe that 0.999... should be equal to 1.


> There is no proof that will ever satisfy a person dead-set against this.

Indeed. I've torn my hair out trying to convince smart people with PhDs in hard sciences and had to give up in frustration.

I usually find that the most success can be had by kicking the ball to them immediately and having them define what they actually mean when they say "0.999…". If we're going to debate whether that thing equals another thing, we better make sure we know what we're talking about. Inevitably, this either causes the dead-set person to give up, or give a myriad of definitions that are either meaningless, ill-defined, or causes them to realize that they don't actually know what "0.999…" means (or what they want it to mean). It is hard to have the patience to chase down the consequences of their ill-fated definitions, though.


Ask for a number between .9 repeated and 1


"There isn't one, 1 is the very next number right after 0.999... Checkmate atheists." (In all seriousness I don't think it's a very convincing argument for someone who doesn't buy the proofs -- it requires you to believe and have internalized the idea that there are an infinite number of reals between any two distinct reals, and therefore that any pair of reals with nothing between are the same number. Those seem like bigger logical leaps to me than the simple proofs for someone who hasn't thought about this stuff.)


How about, ask for an integer between 1 and 2. Can't think of one? Guess they're the same number then.


Apples and oranges. For any two different real numbers, there's a number between them. Integers work differently.


This is a bold assertion, and one that is not obviously true, especially in cases like 0.999... and 1.0


Those aren’t different real numbers. That’s the whole point of the conversation.


Obviously saying it is not obviously true is false if 0.999... == 1.0


Can something be true and not obvious?


Obviously not.

Which is the point in using the word obvious, obviously. Namely, using it to feel superior or to not provide a better argument.


When I use the word, the point is to call out the fact that I think it's obvious, so if others don't, they can explain why. Not to forestall any discussion.

Anyone who uses the word differently is doing it wrong.



I don't understand how they're the same number.

I will never accept that they are the same. The difference between 0.9 repeating infinitely and 1 is infinitely small, but it isn't zero.


What is an "infinitely small" number?

Is 9999..... the same as infinity?

What is 1.0 - 0.99999.... = ?

What does it mean to say X is a number, if you can't subtract it from another number and get a number as an answer?


> What is an "infinitely small" number?

What is an infinitely large number?

> What does it mean to say X is a number, if you can't subtract it from another number and get a number as an answer?

By that logic, 0.99 repeating isn't a number at all, and therefore can't be equivalent to 1, because you can't subtract it from 1. So my understanding that they are different is correct.


> > What is an "infinitely small" number?

> What is an infinitely large number?

Neither is a well-defined concept within the standard reals, and completely unnecessary for understanding that 0.999…=1.

> > What does it mean to say X is a number, if you can't subtract it from another number and get a number as an answer?

> By that logic, 0.99 repeating isn't a number at all, and therefore can't be equivalent to 1, because you can't subtract it from 1. So my understanding that they are different is correct.

0.99… is a real number. The sequence (a_n)_{n positive integer} with a_n = 9/10^1 + 9/10^2 + … + 9/10^n has a limit (do you want me to prove that?). 0.99… is defined as that limit. That limit is 1. Therefore 0.99… = 1.

I think you're struggling to grasp the definition here. The definition of 0.ddd…, where d is an integer between 0 and 9, is the limit of the above sequence with 9 replaced by d. That limit always exists, and the definition is therefore OK. In the case of d=9, the limit is 1.
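
If it helps, the sequence can be computed exactly (a sketch with Python's Fraction, so there's no floating-point noise): the gap between a_n and 1 is exactly 1/10^n, so it eventually drops below any positive eps.

    from fractions import Fraction

    s = Fraction(0)
    for n in range(1, 8):
        s += Fraction(9, 10**n)  # a_n = 9/10 + 9/100 + ... + 9/10^n
        print(n, s, 1 - s)       # the gap 1 - a_n is exactly 1/10^n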


    0.9      is not equal to 1,
    0.99     is not equal to 1,
    0.999    is not equal to 1,
    0.9999   is not equal to 1,
    0.99999  is not equal to 1,
    0.999999 is not equal to 1,
and so on, ad infinitum.

Saying that if you add enough "9"s it suddenly equals 1.0 makes absolutely no sense to me, and I seriously doubt that anyone will be able to convince me that it does make sense. I've read every single post in this thread and none of you have gotten me any closer at all to believing or understanding that 0.9 repeating equals 1.

Maybe I'm too old to understand this "new math" where all numbers are equal to each other.


This is correct.

No finite representation of repeating 0.9s can equal 1.0

The ask that people accept infinite representations as valid is a big one.


> 0.9 is not equal to 1,

> 0.99 is not equal to 1,

> 0.999 is not equal to 1,

> 0.9999 is not equal to 1,

> 0.99999 is not equal to 1,

> 0.999999 is not equal to 1,

> and so on, ad infinitum.

You are correct about all of these, and all finite strings of the above form.

> Saying that if you add enough "9"s it suddenly equals 1.0 makes absolutely no sense to me, and I seriously doubt that anyone will be able to convince me that it does make sense. I've read every single post in this thread and none of you have gotten me any closer at all to believing or understanding that 0.9 repeating equals 1.

I think it's because you, and a lot of other people in this thread, are turning the question on its head. The difficulty does not so much lie in figuring out whether 0.999… is equal to 1 or not, but rather in what we mean when we write 0.999….

I know I'm repeating myself from elsewhere in the thread, but I'll try again. Try to go through these step by step, and feel free to let me know where you lose the thread.

DEFINITION: A finite decimal representation of a real number is a finite string of the form `a_m a_{m-1} … a_0 . b_1 b_2 … b_n` where each `a_i` and each `b_i` is a natural number between 0 and 9 inclusive (a digit). We say that this finite decimal representation represents the real number

    a_m*10^m + a_{m-1}*10^{m-1} + … + a_0 + b_1*10^{-1} + b_2*10^{-2} + … + b_n*10^{-n}.
Note: The previous definition deals with finite strings and finite sums. I hope we can agree that these are well-defined and unambiguous concepts.

EXAMPLE: The string `12.98` has `m=1`, `n=2` with `a_1=1`, `a_0=2`, `b_1=9` and `b_2=8`. It therefore represents the real number

    1*10^1 + 2*10^0 + 9*10^{-1} + 8*10^{-2}
(duh!).

Within this standard framework, there is no way to ask "what is 0.999…?". It is not yet defined, because we have only defined what finite strings mean. The standard definition for what one means by 0.999… follows. (One can obviously also define things like 0.888…, 1.999…, etc., but let's stick to one case here.)

DEFINITION: Let `(c_n)_{n natural}` be a sequence of real numbers (let me know if you need a definition of sequences!). We say that the sequence has the limit x as n tends to infinity (these are words, you don't have to ascribe meaning to "infinity" in that sentence – it's just a word, like "gnarf"!) if, given any real eps>0, there exists an M such that for all m > M, |c_m - x| < eps.

DEFINITION (this is the definition you have to wrap your head around before continuing): Consider the sequence `(c_n)_{n natural}` where `c_n` is the finite sum

    9*10^{-1} + 9*10^{-2} + … + 9*10^{-n}
The string `0.999…` (which we colloquially speak of as "zero point nine nine nine with nines repeating forever") denotes the limit of the sequence `(c_n)_{n natural}` as n tends to infinity (if it exists).

"THEOREM": The limit defining `0.999…` does exist. It is `1`.

PROOF: You can fill this in. If you can't, I'm happy to do it.

As you can see, at no point in the above did feelings or beliefs matter :-)
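
For completeness, a sketch of that proof, using only the definitions above: the finite sum is geometric, so

    c_n = 9*10^{-1} + 9*10^{-2} + … + 9*10^{-n} = 1 - 10^{-n},

and therefore |c_n - 1| = 10^{-n}. Given any eps > 0, pick M with 10^{-M} < eps; then for all m > M, |c_m - 1| = 10^{-m} < 10^{-M} < eps. So the limit exists and equals 1.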


I don't have a strong opinion or much mathematical knowledge, but an "infinitesimal" number is a thing that most people have heard of even if they're fuzzy on what it is. If there is such a thing, what is the difference between 0.999... and 1 - 1/∞?


Those are great questions that not every system is required to address in the same way. (In a similar vein, IEEE 754 distinguishes +0.0 from -0.0, and Java's Double.equals treats them as unequal even though == does not.) This is a breakdown in notation and/or convention. There is no ground truth, just what's true within the system.


So does this mean that an infinitely small number is zero? As in 1/∞ ?


In the real numbers, there is no such thing as an "infinitely small number" apart from zero. Yes, there are infinitely many numbers between any minuscule number and zero, but the way they are defined, every single number you can grasp is finitely small. The "infinitely" small gap is inaccessible. In some other number systems it isn't, but in the standard reals it is.

That means that the "infinitely small" doesn't exist; "smallest apart from zero" doesn't exist either.


You can read about this in any work on nonstandard analysis. ("Nonstandard" is just the name, much like "imaginary" numbers.)

An infinitely small number is zero when projected onto the real number line. If you introduce an infinitesimal quantity to the reals, then for every number there is a unique real number to which that first number is infinitely close (that is, the difference between them is infinitesimal). You can use that real number as a (good) approximation of all the nonstandard numbers in its halo. (As long as you're comparing it to other real numbers.)


> So does this mean that an infinitely small number is zero?

What does "infinitely small" mean?

> As in 1/∞ ?

What notion of division are we talking about here? The division most people expect is that of real numbers. ∞ is not a real number, so you'll have to specify what you mean.


There is no infinitely small number between 0.999... and 1. The difference is 0.000... Not infinitely small, but infinitely zero.


> There is no infinitely small number between 0.999... and 1. The difference is 0.000... Not infinitely small, but infinitely zero.

Zero. Just zero. The difference is zero. 0. Because 0.999… = 1.


You are stating that 0.999... = 1 proves that 1 - 0.999... equals zero. I am stating that 1 - 0.999... = 0.000... proves that 0.999... = 1.

I think people intuitively see that infinitely zero equals zero.


> 1 - 0.999... = 0.000... proves that 0.999... = 1.

If people accept the former, and that the RHS of the former is in fact 0, they've already also accepted that 0.999…=1. I don't see what the discussion is at that point


If people don't accept the former, they can take out a pencil and paper to compute it themselves. After a few digits it will become obvious.

I don't have a direct computation for making the latter obvious, just indirect ones like 1 - 0.999... and 3 x 0.333...


> If people don't accept the former, they can take out a pencil and paper to compute it themselves. After a few digits it will become obvious.

How can they compute 1-0.999… when they clearly have no idea what 0.999… is?


They have to know that 0.999... means you never stop writing nines.

Put 1.0 on top, 0.9 on the bottom. Start subtracting from left to right, and keep writing nines on the bottom as you go to the right. In no time you'll see that the answer is infinite zeros.


> They have to know that 0.999... means you never stop writing nines.

How do they know that that's a real number?


They don't have to know that it's a real number. Knowing that you never stop writing nines is sufficient to perform the calculation.


You're asking them to perform subtraction. They probably know how to do that with real numbers, but problably not with much else. So they'll have to know that they're real numbers (or whatever numbers you are demanding that they be – you're still unclear on this point if it's not actually the reals).


Internally I'm saying they're real numbers. In what I say to the person trying to intuit that 0.999... = 1, I'm deliberately avoiding talking about number systems. I'm assuming this person thinks of numbers as sequences of digits, possibly with a decimal point.


And that is where it all goes wrong.


In calculus, yes.


It's the smallest number bigger than 0.


That doesn't exist. An open interval doesn't have a smallest number.


It does exist. The other poster just clearly showed that it exists by referring to it.

The problem is that if we include such a number in our formal system of math, we quickly find contradictions and the whole system falls apart. So such a number is incompatible with any formal system of math (though I guess you could start building one which does include such a number and see what properties it has).

Herein lies the problem: the people you are talking with do not use a formal system. Their system of math has a flaw similar to the one in their naive system of grouping things, which would admit the grouping that contains every grouping that doesn't contain itself. People rarely deal in formal systems, and thus they can handle completely illogical statements fine as long as they are protected from seeing the consequences.


You are certainly correct that people arguing the opposite side probably don't have a formal system in mind, but I think the intuition that an open interval in the Reals doesn't have a smallest number is easy to grasp even without any formal training. So you can force them to see the consequences of it through fairly straightforward logical contradictions.

Assume x is the smallest real number greater than 0. Then x/2 is also a real number and is greater than 0 but less than x. Therefore, x can't be the smallest real number greater than 0.


In math, when assuming the existence of something proves a contradiction, we conclude that the thing does not exist. The description may exist ("integer between 3 and 4"), but there is no described object. A description names a set or a class, and that class can have 0, 1, or more members.


>In math, when assuming the existence of something proved a contradiction, we conclude that the thing does not exist.

Well only to the extent that you don't want to throw away any of the other axioms. Sometimes you do and there are some fun systems of math, but few have any practicality and those that do are often so advanced that even someone with an undergraduate focus in math can't appreciate those systems.

It is much the same with computer science. I personally enjoyed playing around with formal concepts of computation and adding some extras to see what happens. For example, what happens to a Turing machine if part of the machine can time travel or has access to an oracle. Does this make concepts like time travel inherently contradictory to our notion of computation?

But the practicality of these exercises does not exceed their entertainment value.


Of course it does. It's called the infinitesimal. Its common definition in terms of real numbers is 1 / infinity: https://en.wikipedia.org/wiki/Infinitesimal

If you've taken Calculus, you've already worked with math that requires the infinitesimal to exist.

It's not a value you can meaningfully write out, but you can't write out pi, e, phi, root 2, 1 / 3 in base 10, root -1, etc. "I can't write it down" isn't a particularly unique property for numbers.


> If you've taken Calculus, you've already worked with math that requires the infinitesimal to exist.

Not at all. Standard calculus uses standard real numbers, for which there is no infinitesimal. One may well speak of infinitesimals as a mental tool when building a mental model for calculus, but those infinitesimals are not actual real numbers (or a well-defined mathematical object at all - in standard calculus).


Correct. This is covered in the article. https://en.wikipedia.org/wiki/0.999...#Infinitesimals


There is no smallest positive infinitesimal either. At least in theories that manage to define those rigorously. And it’s mostly a formal trick anyway; standard epsilon-delta calculus avoids them entirely.

Had you actually meaningfully studied this subject, or did you just link to a Wikipedia article you half-heartedly skimmed one day?


That's a funny way to say, "No, I think you misunderstand. I mean to say no single infinitesimal number exists. Like infinity, the concept exists, but as a literal single number, no."


> There is no smallest positive infinitesimal either.

There is in the surreal and hyperreal number systems. I got that from skimming wikipedia though....


At the very least, don't write "Of course it does". It does not in the real number system.


Why is 0.000... bigger than 0?


It isn't.


Or is it? Say I'm a layman and I decide that in the system of math as I understand it, 0.000... is larger than 0. Yes, if I were going to be completely formal with my own system of math I would eventually have to face the problems this introduces and resolve them, but until then I can generally adopt a self-contradictory system and continue to live my life unaffected. Much like many people live their whole lives using naive set theory for their understanding of sets.


Then in your system of math 0.999... is also less than 1.

However, basic arithmetic taught to children requires that adding trailing zeros does not change the value of a number. You'll have a hard time doing arithmetic once you change that assumption.


You are correct. This was a rhetorical question to get traderjane to question whether "bigger than 0" really applies here.


strictly bigger than 0


Doesn't exist.


bigger or equal


0.


Ask for a letter between G and H.


You're missing the point. This would be an analogy fit for talking with someone who's looking for an integer between 1 and 2.


0.00...1


> 0.00...1

And what does this mean? I will remind you that for an integer d between 0 and 9, 0.ddd… means the limit of d/10^1 + d/10^2 + … + d/10^N as N tends to infinity.


  0.000...1 = 1/∞


And what does the right hand side of that mean? Division is commonly defined for a real numerator and a real, non-zero denominator. You are using the common symbol, but with ∞ in the place of the denominator. Since ∞ is not a real number, you must be using a non-standard definition of division, and have to define what you mean.


In some systems, division by ∞ is not defined at all (forbidden); in others it is defined as 0; in still others it is defined as nonzero.


> In some systems, division by ∞ is not defined at all (forbidden); in others it is defined as 0; in still others it is defined as nonzero.

Fine by me. Define whatever notion you're using. You can't just throw out non-standard things and expect people to know what you mean.


No, there's no 1. 2OEH8eoCRo0 is exactly right. Subtract 0.999... from 1 and you get 0.000...


    0.999... + 0.000...1 = 1
    0.000...1 = 1/∞
    0.999... = 1 - 1/∞


You're repeating the same wrong thing you said earlier.

It's 0.999... and not 0.999...0

In the same way, it's 0.000... and not 0.000...1.


  0.999... = 0.999...9
  0.999...9 + 0.000...1 = 1
  0.999...0 + 0.000..1 = 0.999..1
  0.000...1 = 1/∞
  0.999...9 = 1 - 1/∞
  0.999...0 = 1 - 1/∞ - 9/∞ = 1 - 10/∞
  If x/∞ = 0, then 0.999...x = 1.
  If x/∞ ≠ 0, then 0.999...x ≠ 1.


Ok, now you're saying that infinite decimals have final digits.


If the Universe is infinite, then when we compare you to the size of the Universe, you are infinitely small, so you don't exist at all. Why should I waste my time?

If the Universe is finite, then a finite number of elements can make only a finite number of combinations, so this discussion is repeated an infinite number of times anyway. Why should I waste my time again?


Yes, you're wasting your time if you compare my size to the size of an infinite universe. If you really want to waste your time that way, you don't want to use real numbers. On the real number line my size is exactly zero. You need to go into infinitesimals, which are out of place when you're looking at decimal notation, which is only for real numbers.


None of these, except 0.999… and 1 are well-known standard objects in this setting. You have to define what you mean.


I defined it:

    0.000...1 = 1/10^∞ = 1/∞


That's not a definition. Neither 10^∞ nor 1/∞ is defined in any standard system, so you'll have to define those too if you want to use them to define 0.000…1.



You're working with surreal numbers? This is not what people would expect unless it's explicitly stated. In addition, you're likely going to have a hard time explaining surreal numbers to someone who struggles to grasp that 0.999... = 1 in the ordinary reals.


Surreal numbers and infinitesimals are simpler to work with when you need to work with infinite series.

Here John Conway explains them: https://www.youtube.com/watch?v=1eAmxgINXrE


Nice notation. I will steal it.


What exactly is the value of the number that ends in a 1 but has an infinite number of 9s before it?


    0.999...1 = 1 - 1/∞ - 8/∞ = 1 - 9/∞


So what you're really trying to say is 0 = 0


You are 1 in infinite Universe, so you are 1/∞, so you are 0.


> It is hard to have the patience to chase down the consequences of their ill-fated definitions, though.

Of course it's hard because in day to day life, even for the vast majority of STEM practitioners, the nuance of the proof that 0.9999... is 1 is not of much utility.

Whenever one sees a 0.999[... to however many digits] one can safely assume it's less than one or perhaps more realistically "almost 1". To say 0.999... with the very specific detail that the 9's go on forever is actually a strange thing to say and outside of most people's experience.

There are simple enough proofs of this that normal folks who paid attention in high school can follow, but I think it has to be framed more as a clever brain-teaser than as a proof.


> Of course it's hard because in day to day life, even for the vast majority of STEM practitioners, the nuance of the proof that 0.9999... is 1 is not of much utility.

Oh absolutely. I'm not expecting STE(no M this time!) practitioners to necessarily be aware of why 0.999…=1 in their daily lives, but I do expect them to have encountered enough situations in their field of expertise where scraping the surface using shallow intuition and gut feeling led them wildly astray. I'm therefore surprised that they're willing to deny this basic fact to the face of mathematicians. The ones I've interacted with also don't happen to be the types that'll start arguing Anatomy 101 facts with a heart surgeon at a bar, but somehow arguing over basic calculus with mathematicians is fine.


> start arguing Anatomy 101 facts with a heart surgeon at a bar

Well, there's this:

wikipedia.org/wiki/Vaccine_hesitancy

wikipedia.org/wiki/Homeopathy


(STATEMENT OF PERSONAL IGNORANCE [SOPI]: Anyone who actually understands this stuff please correct my mistakes below. Thanks.)

In the real numbers, which are not always simple or intuitive, 0.99... = 1. That's true and I seem to understand the proof.

But the real numbers aren't the only system that might be sitting behind "0.99..." and "1" when I write those symbols down and talk intuitively to people in my family. The reals are just the system we're taught first.

I believe there are other systems (I think the surreals are an example) that work just as well for everyday purposes, but where (my understanding is that) there are numbers that differ from 1 by a value that approaches zero, yet those numbers are not equal to 1. (I've played with the surreals but only as a hobbyist.)

If you do calculus in these other numbers, I think physically meaningful problems will still yield the same answers. (For example Zeno's Paradoxes are still not an excuse for failure to attend school.) But it isn't a law of nature, I think, that all number systems that can hold 0.999... and 1 must make them equal.


The easiest way I know to explain it is fractions.

  1 / 3 = 0.33333....

  2 / 3 = 0.66666....
So what's 3 / 3?

Some people don't like that one. They might like this one better:

  1 / 11 = 0.0909090909...
What's 10 times that?

  10 * 0.0909090909... = 0.90909090...
So, let's do some addition and let the values zipper together because a nine will always line up with a zero:

  10 * 0.0909090909... + 0.0909090909... = 0.90909090... + 0.0909090909... = 0.9999999999...
However, 10 * 1 / 11 = 10 / 11. And 10 / 11 + 1 / 11 = 11 / 11. So 11 / 11 must be the same as 0.99999....

This works for any repeating fraction. You can do it with 1/7 and 6/7. You add the decimal representations of the numbers up and the value will be 0.99999...

Technically, it works for any repeating fraction in any base. This is great because a lot of fractions are only repeating fractions in certain bases. So if 0.1 in base 10 is a repeating decimal in base 2 (it is) then you can show that (in decimal) 0.1 + 9 * 0.1 will represent (in binary) 0.11111...., which is equal to 1.

The issue is that 1 / 11 + 10 / 11 (in decimal) must still equal 1 in ALL bases. Well, guess what? In Base 11 the decimal looks like:

  0.1 + 0.A = 1.0
And 1.0 in base 11 is 1.0 in any base.
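
The base-2 point is easy to see in any language with IEEE floats; in Python, for instance:

    from decimal import Decimal

    # 0.1 has no finite binary representation, so the nearest float64
    # is only an approximation:
    print(Decimal(0.1))      # 0.1000000000000000055511151231257827...
    print(0.1 + 0.2 == 0.3)  # False, for exactly this reason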


This is basically how we were taught to convert decimal fractions with periodic decimal expansions to regular fractions in school. You can even do that without that lining up of zeroes and nines, just multiply by 10 raised to the period length: if x = 0.090909..., then 100x = 9.090909... (we shift the decimal point by two positions), and since the stuff after the point is the same, after subtracting it cancels out: 100x - x = 9.0, and so 99x = 9, from which we obtain x = 1/11.
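
That shift-and-subtract recipe is one short function, if anyone wants to play with it (the function name is mine): x = repetend / (10^k - 1), where k is the period length.

    from fractions import Fraction

    def repeating_to_fraction(repetend):
        """Exact value of 0.(repetend repeated forever):
        10^k * x - x = int(repetend), so x = repetend / (10^k - 1)."""
        k = len(repetend)
        return Fraction(int(repetend), 10**k - 1)

    print(repeating_to_fraction("09"))  # 1/11
    print(repeating_to_fraction("3"))   # 1/3
    print(repeating_to_fraction("9"))   # 1 -- 0.999... lands exactly on 1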


That 11ths thing is actually really clever. I had never seen that argument before.


A number is just a number, it doesn't approach anything. A series can approach something, but a number can't. In any system where 0.99... is valid notation for a number, it doesn't approach anything.


Sure. Instead of "0.99..." please substitute lim_{n→∞} (9*10^{-1} + 9*10^{-2} + … + 9*10^{-n}).

The point I'm making is that the "obvious truth" 0.99... = 1 that we're all talking about depends on the assumption that we're working in the real numbers.

I claim that the real numbers are not something intuitively obvious to every sufficiently intelligent person; instead they are kind of weird and technical. I go on to claim, though I'm more unsure of this, that the real numbers are not even the only way to make calculus work.


Well, a limit of a series is just a number and doesn't approach anything either. If the series approaches something, we say the limit exists and is equal to that thing.

Anyway, in the surreal numbers you could probably make up a notation where 0.999... actually denotes 1 - ε or something. But I daresay it might not be very useful because then how do you denote 1 - ε/2 or anything else.


We've defined the series we're talking about, and we've defined 0.99... as the limit of that series.

I don't know whether repeating decimals are useful for testing equality of surreal numbers.

All I'm trying to say is we're all talking about anything and everything except the definition "when are two real numbers equal?"

And then we're saying people who don't understand the consequences of that definition are kind of dummies ... while we continue to not actually say what the definition is.


Fair. Maybe the definition is that two real numbers are equal iff there is no real number between them. Which is nonintuitive if you're used to things like natural numbers which have successors. But if you teach people this way of thinking about the real numbers, then arguments like "it's the last number riiight before 1" stop working.


It's smart of you to bring up the surreal numbers: https://thatsmaths.com/2019/01/10/really-0-999999-is-equal-t...


Strictly speaking I brought up the surreal numbers.


Then I guess you're the smart one. My apologies.


Sorry, sorry. I've been trying to bring up nonstandard analysis, and repeatedly getting poked by people saying "what do you think 'calculus' is?" "have you ever heard of a limit?" and so forth. In the process I have apparently become even more ornery than usual.


As we have no idea who you are don’t you think it makes sense we try to figure it out? How one answers the question will vary based on background.


Oh, I figured it out, kind of. [I don't really know what I'm talking about either.]

  1.000...0 = 1
  1.000...05 = 1 + ε/2
  1.000...1 = 1 + ε
  0.999...8 = 1 - 2ε
  0.999...9 = 1 - ε
  0.999...98 = 1 - ε/5
  .
  .
  .


> 0.999...8 = 1 - 2ε

> 0.999...98 = 1 - ε/5

cough


Sure, why not? Different notations for different surreal numbers. But it's not actually a good notation; I wouldn't know how to write 1 + 10ε.


> I go on to claim, though I'm more unsure of this, that the real numbers are not even the only way to make calculus work.

That claim is certainly true. The proof is that calculus (Analysis) exists for the complex number system too. Although I don’t think that’s what you meant and I doubt the complex number system is “more intuitive.” Just out of curiosity, have you heard of real analysis? How do you define calculus?


Take a look at nonstandard calculus:

https://en.wikipedia.org/wiki/Nonstandard_calculus

It is based on the hyperreal numbers:

https://en.wikipedia.org/wiki/Hyperreal_number

Practically speaking, I don't think it buys you anything over traditional calculus/analysis. It's just pointing out that there are alternative approaches to formalizing calculus.


A Riemann integral in complex analysis uses the same definitions as one in real analysis. Derivatives ditto. In the thread we're comparing real analysis to nonstandard analysis.

I'm eager to be corrected if you can tell me something I said that's wrong. I'm not interested in gradually upping the ante with you until it's clear who really has more math background.


I didn’t really see anything wrong in the thread but you use words like “calculus” and I honestly don’t know exactly what you mean. Do you mean the plug and chug methods often taught in high school? Or do you mean the application of a set of theorems derived from the axioms of a given system of numbers? If you mean the former then perhaps we can clear up some misconceptions which are clarified by the latter.


Those number systems do exist, but I'm not sure it's right to say they work just as well for everyday purposes. They work only as long as you use them in a way that reduces to treating them as real numbers, either never computing an infinitesimal in the first place or calculating 23 + 6ε and saying "oh that's basically just 23".


Sure. And it's true, 0.99... is equal to 1.

All I'm saying is [SOPI below] it's all a little more technical than the junior high school proof. For example if 23+6\epsilon = 23, then how do I define 23 + 6\epsilon - 23? I can choose different approaches here, but "zero" is going to be pretty inconvenient when I go to do an integral.

[SOPI] Statement of Personal Ignorance. I don't quite know what I'm talking about. If you do know, please step in and help correct me.


What are you integrating over?


I'm taking the limit of any convergent infinite sum of terms each weighted by \epsilon. Although \epsilon itself equals zero, of course the limit of the sum isn't necessarily zero.

So if we go and define some other funky "infinitesimal" \epsilon != 0 in the surreals, we have to be careful. Apparently people have done that kind of thing successfully, but it took a long time after Cauchy for that to happen.


Even in surreal numbers, where there indeed exist numbers "infinitely close" to 1 but smaller than it, 0.9999.. would still be exactly 1. That's a quirk/feature of the decimal representation, not the underlying theory of numbers.

In the surreal numbers there's a number called ε, a number infinitely close to 0 (but larger than it), so what you might think 0.9999.. represents would actually be written 1-ε, maybe?

But there's another number ε/2 that is between 0 and ε; 1-ε/2 is even closer to 1 than 1-ε is. Indeed, there are infinitely many numbers infinitely close to 1! (and none of them is really represented by 0.999...)


    0.999... + 0.000...1 = 1
    0.000...1 = 1/∞
    0.999... = 1 - 1/∞
Is 1/∞ zero or not?


If you accept that

1/∞ = 0

Then you accept that

∞ * 0 = 1

But the definition of 0 is exactly that anything multiplied by it must be 0. So this cannot be true.

To take a more verbal route: you cannot take nothingness and repeat it. Repeating (or multiplying) nothingness (or 0) is fundamentally nonsense.

Programmer explanation: one cannot loop through `null` even once, let alone a large number.


No, you don't have to accept that. In my Analysis 2 course we worked a whole bunch with [0, ∞], i.e. the nonnegative real numbers together with infinity, and we defined 1/∞ = 0 and ∞ * 0 = 0. You lose some of your usual rules of arithmetic, but it gets a lot easier to talk about integrals.


https://en.wikipedia.org/wiki/Surreal_number

Quote:

There are also representations like

{ 0, 1, 2, 3, … | } = ω

{ 0 | 1, 1/2, 1/4, 1/8, … } = ε

where ω is a transfinite number greater than all integers and ε is an infinitesimal greater than 0 but less than any positive real number. Moreover, the standard arithmetic operations (addition, subtraction, multiplication, and division) can be extended to these non-real numbers in a manner that turns the collection of surreal numbers into an ordered field, so that one can talk about 2ω or ω − 1 and so forth.


I'm not quite sure I understand what you're trying to tell me? The example I mentioned is not about surreal numbers.


I recommend watching this lecture about surreal numbers by John Conway: https://www.youtube.com/watch?v=1eAmxgINXrE .


There are some weird rules like that for floating point numbers too.
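
For instance, in IEEE 754 floats (shown here in Python) 1/∞ is defined as 0, but ∞ × 0 is NaN rather than the 0 chosen above:

    inf = float('inf')
    print(1 / inf)      # 0.0
    print(inf * 0)      # nan: IEEE 754 leaves inf * 0 undefined
    print(inf - inf)    # nan as well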


What on earth is .000...1

If it terminates it’s not an infinite series. In any case, if you add any finite number to .99999... it’ll be equal to 1 + that number.


One way to introduce the idea that a number represented as decimal digits can have multiple representations is to talk about the numbers 1 and 1.0 being exactly the same. And that 1.00 is the same as 1. Just like 0, 0.0, and 0.00 are the same number. Most people would agree at this point.

Then keep stretching the number of zeroes to 0.000... - which, again, is exactly the same as 0.

From there, it is not a huge stretch to go from accepting that 0.000... is another way to write 0 to accepting that 0.999... is another way to write 1.


The real question is what do you get if you add:

0.999 ... infinite number of 9s ... 9

and

0.000 ... infinite number of 0s ... 1


> 0.999 ... infinite number of 9s ... 9

> 0.000 ... infinite number of 0s ... 1

Wouldn't these be ill-defined? You can't say "infinite number of 9s" then have the numerical representation terminate with a 9. If the decimal representation terminates, then by definition it isn't infinite.


Whether or not a specific construction is well defined depends on the system you are using. This representation is certainly well definable.

I think your objection here is in the same vein as those who object to the notion that 0.999... = 1.0

For most people, the concept of an infinite representation is not well defined.


> This representation is certainly well definable.

Do you mind defining it more formally for me then or pointing me to explanations/systems that would permit such a definition? I'm not really sure what such a formulation might look like, but then again I'm not nearly well-acquainted enough with mathematical topics past what is commonly taught. Does it involve the hyperreals, surreals, or one of the other systems beyond the "standard" reals as mentioned by other comments in the thread?

> I think your objection here is in the same vein as those who object to the notion that 0.999... = 1.0

> For most people, the concept of an infinite representation is not well defined.

I think (or at least hope) it's a bit more nuanced than that. I understand that some reals with infinite decimal representations can be well defined as an infinite series, and that defining 0.999... using such a series allows other manipulations to be done to complete the proof.

However, adding the concept of an "end" to said infinite series kind of breaks that understanding. The translation to an infinite sum no longer seems to hold, so I'm at a bit of a loss.

It's also somewhat counterintuitive to have an "end" to infinity, but as the rest of the thread shows intuition isn't always reliable for this kind of thing, especially for those who aren't particularly familiar with more detailed bits of math.


When I was a child I was convinced by a pretty simple conversation with my father:

Me: 0.9999... is not the same as 1

Him: Well if it's not the same is it more than 1 or less than 1?

Me: Less

Him: Okay then how much less is it?

At this point I started trying to do 1 - 0.999..., using the methods I'd been taught, and after a few iterations of "borrowing" the 1 I realized the answer was 0.000... which I was pretty convinced was equal to 0.


Hehe... smart man.

Another one is that

1 / 3 * 3 = 1

<==> 0.333... * 3 = 1

<==> 0.999... = 1


"yes but 1/3 does not equal .333... it's just an approximation since there's no perfect way to represent 1/3"


If 1/3 doesn't equal .333... then how much do they differ by?


    (1/3)/∞


In base 3,

    1/10*10=1, 0.1*10=1
What's the problem?


I usually say "If and only if two numbers are different, then you can find a number between them". People often accept this axiom. Then I invite them to find a number between 0.999... and 1.


This works because you have defined what equality means.

We all wanna talk infinity because it sounds more exciting, but I think everybody gets "infinitely close to 1" pretty well intuitively. What they don't get is whether "infinitely close to 1" means "equal to 1". That could happen because these people are stupid.[note] But it could also happen if nobody has defined equality.

[note] For example, even highly educated people sometimes don't listen, which is functionally a lot like being stupid.


is there any difference between a black hole and nothing? (somewhat joking but I was thinking of a physical analogy of the limit approaching zero)


A black hole has mass. Nothing has no mass.

(also a somewhat joking answer)


Yes, there's a difference.

Firstly, though, there are multiple different types of black hole, from the theoretical to the astrophysical. We must narrow your question down to have any hope of a good answer.

The simplest theoretical black hole, the Schwarzschild black hole, has one variable -- the central mass -- which must be positive.

If we set the central mass in a Schwarzschild spacetime to zero, then we have Minkowski spacetime: no curvature, no horizon, no black hole.

The Schwarzschild spacetime is completely empty except for the central mass, which is constant and located at an infinitesimally small point at all times. The Minkowski spacetime is completely empty everywhere and at all times.

The symmetries of Schwarzschild and Minkowski spacetime are different, and if one were to probe the spacetimes in question with a Synge curvature detector [1], we would quickly discover which we were probing if our probes happened to be placed close to the central mass, and eventually if they were placed far from the central mass.

If one placed the probes infinitely far from the central mass, it would take an infinitely long time to distinguish the presence of the central mass (which makes spacetime non-Minkowski); but these spacetimes are eternal anyway, so that's OK. So that's almost a "yes" to there being a theoretical black hole analogy between (1-) 0.999... and (1-) 1.

I would not call this a physical analogy since neither Minkowski spacetime nor Schwarzschild spacetime is at all physical. Nature is full of stress-energy (gas, dust, ...), any of which breaks the vacuum condition of these spacetimes; there seem to be a lot of astrophysical black holes at the centres of galaxies and individual/binary stars that have become black holes, and even a two-black-hole universe is markedly different from a Schwarzschild spacetime. Additionally, these astrophysical black holes are not eternal, unlike Schwarzschild. In particular, the stellar-mass ones were once stars, and the galaxy-centre ones at least had less mass in the past. These last conditions alone are substantial deviations from Schwarzschild that are even more obviously not Minkowski (e.g. if you put a probe finitely but sufficiently far away, you could see an image of the radiant precursor star rather than the black hole!).

Finally, in our physical galaxy the answer to your question is a big "yes!". The observed orbits of these stars [2] would be noticeably different if the central mass in the Milky Way's central parsec were anything but a black hole, and would be even more different if that central mass were not there at all.

--

[1] Synge, J.L., _Gravitation. The General Theory_, ch. XI §8, "A five-point curvature detector".

[2] http://www.astro.ucla.edu/~ghezgroup/gc/animations.html and http://www.astro.ucla.edu/~ghezgroup/gc/blackhole.html


Why not 0.00...1?


This is not an infinite decimal. The digit 1 is somewhere out there.


But this is not a compelling argument to somebody in this situation. While correct, it feels identical to saying "it just is".


The whole numerical representation scheme really is just a man-made system. If you dig deep enough the veneer disappears. This is especially noticeable when you start to see numbers that are finite in decimal but have infinite repeating patterns in binary.
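
For example, 1/10 terminates in decimal but repeats in binary, which you can see directly in Python (Fraction and float.hex are both in the standard library):

    from fractions import Fraction

    print((0.1).hex())       # 0x1.999999999999ap-4: the repeating pattern, rounded off
    print(Fraction(0.1))     # 3602879701896397/36028797018963968, the exact value stored
    print(0.1 + 0.2 == 0.3)  # False, a side effect of that rounding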


Though you could treat it as an infinite series in a similar way:

0.9 -> 0.99 -> 0.999 -> ... -> ?

0.1 -> 0.01 -> 0.001 -> ... -> ?


The first is an infinite series:

  9/10 + 9/100 + 9/1000 + ... + 9/10^n + ...
The second is not?!


I mean you could if you want to:

1/10 + -9/100 + -9/1000 + -9/10000 + ...

But I just meant them as series, not necessarily as sums.


Or

  1 - 9/10 - 9/100 - ...
or

  1 - ( 9/10 + 9/100 + ... )
So it becomes circular:

  0.00...1 = 1 - 0.999...

(In math a series is a sum:

https://en.m.wikipedia.org/wiki/Series_(mathematics) )


Whoops, you're totally right. I think I should've said "sequence".


Because unlike 0.99..., 0.00...1 is not a Real number.

The decimal representation of a Real number has to be indexed by Natural numbers, i.e. every decimal digit[n] has a well-defined index n which is a Natural number. Infinity is not a Natural number, so 0.00...1 is not a Real number either.


Then the question is, is 0.00....1 equal to zero? We use the definition of equality above and we say yes.

EDIT: The number above seems well defined. It's lim n->inf (10^-n). That's zero.


It is not.

The decimal representation of a Real number has to be indexed by Natural numbers, i.e. every decimal digit[n] has a well-defined index n which is a Natural number. Infinity is not a Natural number, so 0.00...1 is not a Real number either.


This is usually described mathematically by saying that the representation of a decimal has to be countable, whereas the number 0.000...1 is not a countable representation.

I will say though that if your explanation of 0.999... = 1.0 requires that you explain the distinction between countable and uncountable infinities, that's a big ask for most lay people.


Could you please explain a little more what "not a countable representation" means?


A countable set is one where you can reach any element in a finite amount of steps. See:

https://en.wikipedia.org/wiki/Countable_set


If you're saying we never get from the initial 0 to the trailing 1 in a finite number of steps, that's true and that's what DavidVoid is saying. But I haven't made the list of digits uncountably large by adding one element. Instead I put that element in a transfinite position in the ordering and I have to watch out for weird consequences.

I'm no expert, but countability doesn't depend on how the set is ordered. It depends on whether the elements can be placed in 1:1 correspondence with the integers. 1,0,0,... has a countable number of elements, and so does 0,0,0,...,1. They can be put in 1:1 correspondence with each other. This definition of countability is described in your link.

I meant to ask, does "countable representation" have some kind of detailed definition that I can look at?


It's not the set of digits you use which is uncountable, it's the representation itself.


I think it's just another way to word DavidVoid's explanation a few comments up.


Ok. (a) I think you're saying that 0.00 ... 1 is not a real number. (b) Do we agree that lim n->inf 10^-n is a real number? (I.e. zero?) (c) Then you're saying that limit in "(b)" is not a reasonable definition of the string "0.00 ... 1". Is that correct?


That is correct.

The limit lim n->inf 10^-n is exactly 0; it is not 0.00...1.


Since we have no common definition of what 0.00...1 might possibly mean, let's say we agree.


0.0000... is the repeating decimal representation of zero.


What is 0.00...1 times 34?


0.00...34


Is 0.00...1 times 34 equal to 0.00...1 times 3.4?


I think the answer to that question depends on the axioms you are using.


That's a great approach


I didn't grok infinity until I started thinking in terms of verbs rather than nouns. As a static number, the concept of infinity makes no sense; but once reimagined as a process (start counting up from 1, and never stop), all apparent paradoxes disappear.

This is the inverse problem: it could just as easily be reframed as 0.000...0001 = 0. Defined as static nouns (does such a thing exist in nature?), it's seemingly paradoxical, and fascinatingly debatable in a "is a hot dog a sandwich" sort of way. But reframe it as a process (or as code), and all confusion disappears: for how many loops would you like to proceed? If you never stop, 0.99999... clearly approaches 1, without ever reaching it, and asking if they're the "same" is as academic as asking if the Ship of Theseus is the same ship, or if an electron is the same entity from one picosecond to the next.
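
(Spelled out as an actual loop in Python, each pass yields one finite snapshot:)

    x = 0.0
    for k in range(1, 13):
        x += 9 / 10 ** k    # append one more 9
        print(k, x)         # 0.9, 0.99, 0.999, ... every snapshot is < 1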


> it could just as easily be reframed as 0.000...0001 = 0

But it can't be, because there's nothing after "0.000..."; that ... goes on infinitely. It's literally "0s forever, never stopping". It's not a process of "keep adding 0s", it's the end result of never adding 0s. It's not a process, it is a noun.


My argument is that "noun" is a purely human abstraction, and that phenomena that act noun-like in nature are at best snapshots of iterative processes. Within the bounds of the noun abstraction, sure, I'll cede that point.

But if one eschews that abstraction and looks at it purely as a process (I want to render 1/3 in decimal notation, then multiply that decimal notation by 3), there is always that niggling 0.000...1 remainder at every snapshot. The "never stopping" bit is what smuggles verbiness into the "0.999..." noun, while simultaneously pretending it's a static value.


Regardless of whether or not they are human abstractions, a process (making cheese) and a noun (cheese) are two distinct things.


0.000...1 can be written as 1/inf, which makes sense in the surreal numbers.


You could just as easily say that 0.000…54234 can be written as 1/∞. Surreal Numbers is a bit of a detour in this case. The premise of the idea that 1 - 0.999… could be written 0.000…1 is the mistaken concept that there is some point "after an infinite number of steps" where the expansion of 0.999… stops and you can leave the remaining 1. The expansion never stops and there is no final remainder. The result is 0.000…, which is more typically written as 0. Plain zero, not an infinitesimal.

Personally I find the following algebraic proof to be the most approachable:

          x = 0.999…
          x = 0.9 + 0.0999…
          x = 0.9 + (0.999… ÷ 10)
          x = 0.9 + (x ÷ 10)
    x - 0.9 = x ÷ 10
    10x - 9 = x
     9x - 9 = 0
         9x = 9
          x = 1
     0.999… = 1
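
(If you'd rather check the fixed-point step mechanically, the equation x = 0.9 + x/10 can be handed to a computer algebra system; a quick sketch using sympy, assuming it's installed:)

    from sympy import Eq, Rational, solve, symbols

    x = symbols('x')
    # x = 9/10 + x/10, the fixed-point equation from the proof above
    print(solve(Eq(x, Rational(9, 10) + x / 10), x))    # [1]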


    x = 0.999...
    x = 9/10 + 9/10^2 + 9/10^3 ... 9/10^inf
    x = 9/10 + (9/10 + 9/10^2 + 9/10^3 ... 9/10^(inf-1))/10
    x = 9/10 + (x - 9/10^inf)/10
    x - 9/10 = (x - 9/10^inf)/10
    10x - 9 = x - 9/10^inf
    9x - 9 + 9/10^inf = 0
    9x = 9 - 9/10^inf
    x = 1 - 1/10^inf
    x = 1 - 0.000...1


    x = 9/10 + 9/10^2 + 9/10^3 ... 9/10^inf
    x = 9/10 + (9/10 + 9/10^2 + 9/10^3 ... 9/10^(inf-1))/10
This is exactly the issue I was referring to. You're assuming the sequence stops "at infinity" but infinity is not a concrete number of steps, it's the absence of any end condition. Subtracting one step from "no end condition" is nonsense. The sequence (0.999… - 0.9)×10 does not end earlier than 0.999…; these are exactly the same sequence, repeating 9s without end. The difference between them is zero in every digit, with no trailing 1.


The sequence doesn't end earlier, but it does start later by one element, so they are not exactly the same sequence. One infinite sequence has one more element than the other, so the difference is 1/inf.


No, they start at the same element (they're both 0.999… and thus start with 9/10) and have the same (infinite) "number" of elements. If you lined them up digit by digit there is never a point where one digit is a 9 and the other is a 0.


Incidentally, this confusion is part of the reason why actual infinite sequences are written with a trailing ellipsis (9/10 + 9/10² + 9/10³ + …) and not a final term involving infinity. There is no final term—not even 9÷10^∞. So the correct way to write the separation of the leading term in your formula is:

    x = 9÷10 + 9÷10² + 9÷10³ + …
    x = 9÷10 + (9÷10 + 9÷10² + …) ÷ 10
Note that the number of elements in the sequence is the same no matter how many leading terms you write, so long as the pattern doesn't change. The notation { 1, 3, 5, 7, 9, … } and { 1, 3, 5, … } both refer to exactly the same set; the first notation is merely a bit more verbose. Similarly, the parenthesized portion of the second formula above is exactly equal to x, despite being written with two explicit leading terms rather than three.


I find that "infinite as an endless process" concept intuitively very helpful as well. However, reading Gödel, Escher, Bach[1] showed me that there's another, more static logical interpretation of infinity which also comes in handy.

In an infinite process, you can always take "one more step" to create the item after that. Let's assume there exists a "final" mathematical object that comes after every finite item in the generation process (i.e. it is higher than any item in the list, or smaller, or has happened after all of them). This object doesn't really belong to the infinite generative sequence; it's an item outside all of them, and can't be reached by completing the sequence. It merely exists outside the process and happens to have the property of "dominating" all the items in it.

You can assume its existence in the same way you assume the existence of a number which is the square root of -1, or how you define triangles whose angles add up to more or less than 180 degrees. If you do that, this object "at the infinite" can be formally defined and treated axiomatically to find out what mathematical properties it possesses.

[1] https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach


This thought can be made much more concrete when talking about sets. Clearly we can understand a collection of things as a set. Clearly numbers are things, so we can talk about the set of all numbers. But how many elements are in this set? A clever answer could be: It has as many elements as there are natural numbers! As subsets are also a thing, we could ask next how many subsets the set of all numbers has. A clever, but very problematic answer could be "as many as there are natural numbers!"


That’s actually not true, the power set of the naturals is uncountable.


As I said, it would be a very problematic answer. But only once you properly define when two non-finite sets have the same size can you lead this to a contradiction. Infinity can be understood intuitively; extending it to the cardinal numbers cannot.


> As a static number, the concept of infinity makes no sense; but once reimagined as a process

Super insightful. That's the key right there.

The same concept can also be applied to the physical world. Things are not static, they are in constant flux, everything is a process in motion.


Yes, although for me this conception of infinity as a process also captures why there are probably no actual infinite things in the universe, only in silly games with numbers.


Could you elaborate? Why can't an infinite process be an actual thing in the universe? It's not like we know the start and end dates of the universe...


I wonder if this is related to "intuitionist" math. This is an alternative formulation of math which doesn't have the law of excluded middle, recently discussed on Hacker News relating to this physics research: https://www.quantamagazine.org/does-time-really-flow-new-clu...


If you want to work with the real numbers intuitionistically (or constructively), you quickly find out that infinite decimal expansions are not what you want.

In classical mathematics all the usual definitions of real numbers (decimal, Cauchy sequences and Dedekind cuts) are equivalent. If you overthrow the Law of Excluded Middle, these are all different.

Infinite decimal expansions are bad intuitionistically for several reasons. The first one which comes to mind is that you cannot add numbers together. Imagine your numbers started 0.33333 and 0.66666. OK, so far it would seem that the sum would start 0.99999, but somewhere down the line the first number could contain two 4s, making a 1 carry all the way up and leaving one behind, so that the sum should in reality be 1.00000000001…

On the other hand there could also show up a 2 later, making it 0.999999998. Thus, you cannot decide whether the first decimals should be 1.00 or 0.99 without looking at infinitely many decimals. And the fact that 0.999… = 1.000 will not help you out, since 1.00000000001 ≠ 0.999999998.

Being able to define addition on decimal expansions is equivalent for constructivists to solving the halting problem. It cannot be done.

It turns out Cauchy sequences are better behaved, and (with a bit of computational improvement) you can make a lot of things work out. See Bishop's book, Foundations of Constructive Analysis, for details.
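
Here's a rough Python sketch of the Cauchy-sequence idea (my own toy illustration, not Bishop's formulation): a real is a function that, given n, returns a rational within 1/n of the number, and addition just asks each summand for twice the precision.

    from fractions import Fraction

    # A constructive real: approx(n) returns a rational within 1/n of the number.
    def point_nine_repeating(n):
        return 1 - Fraction(1, 10) ** n     # error is exactly 10**-n <= 1/n for n >= 1

    def one(n):
        return Fraction(1)

    def add(x, y):
        # ask each summand for twice the precision: 1/(2n) + 1/(2n) = 1/n
        return lambda n: x(2 * n) + y(2 * n)

    two = add(point_nine_repeating, one)
    print(float(two(1000)))                 # 2.0 (a rational within 1/1000 of 2)

    # Equality of two constructive reals can't be decided from finitely many
    # approximations, but any finite tolerance can be checked:
    for n in (10, 100, 1000):
        print(abs(point_nine_repeating(n) - one(n)) <= Fraction(2, n))    # True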


Thank you for the reference! Your explanation makes sense to me. I'm curious if students that struggle with learning decimal expansions and limits are just subconsciously rejecting the non-constructive foundations of the math they're learning. Probably not, but it's fascinating regardless.


I wonder if there's anything I can do with my children to prevent them from being bound by this mental limitation?

I would try to explain to them that numbers are a framework for us to understand both the observable universe and abstract ideas, depending on what we're using them for.

Like you said, it's hard for people to understand that numbers have multiple representations and to grasp the implications of those representations. I think that if you can communicate that different representations can have the same meaning, accepting those representations when they come across them may be easier.

Or, if they're experienced enough with math, I think going through Euler's identity in addition to the link could help.

https://en.wikipedia.org/wiki/Euler%27s_identity


I feel like most people are not answering this, they are giving the proof in a different way.

Answering how to teach a child not to be bound to absolute certainties is really hard. I personally would say just let them explore on their own; teaching a person that 0.99999... = 1 results in the same thing as if you taught them that 0.99999... != 1. You need to teach them that all sciences and maths are changing constantly, that what might be a fact today could change tomorrow. You need to teach them that anything can become wrong or right as we progress, and to be open to accepting new information while being hesitant enough not to succumb to false/fake information.

That's a very hard lesson to learn and an even harder one to practice. But one that I think a lot of people need to learn.


I'm willing to believe, but every proof on that page I read came down to basically: it might as well be 1, so it is 1. The way I see it, it comes down to accuracy like any of our measurements and falls under rounding error. There's no way we can ever actually measure the infinite amount of space between .999... and 1, so effectively they're the same. As far as math and anything practical and even theoretical is concerned, they're the same, but... conceptually in my brain it just feels that little bit smaller. I know I'm wrong for all intents and purposes, but I dunno, that.


I find the algebraic way convinces most people:

x = 0.9999..

10x = 9.9999...

10x - x = 9

9x = 9

x = 1


I was prepared to blow my then 6th or 7th grade daughter's mind with this algebraic proof. I started by asking if 0.999... = 1, to which she said "no." I rephrased it and said it is equal, do you know why? She thought for a moment and said "1/9 is 0.111..., so 9/9 is 0.999... and 9/9 is 1." And I had to admit she had a far better solution than I did.


Because I wondered, it’s not a trick:

x = 0.444444...

10x = 4.444444...

10x - x = 4

9x = 4

x = 4/9 = 0.444444...


All the single digit repeating decimals are x/9.

0.111... == 1/9

0.222... == 2/9

...

0.888... == 8/9

0.999... == 9/9 :)


That's how I was taught in my Algebra class


black magic! I wonder if there are any programming languages that are able to handle this properly?




There are programming languages with rational number types, but none that I know of that represent numbers as repeating decimals.
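
For what it's worth, a rational type makes the 1/3 example from upthread come out exact; e.g. Python's standard-library fractions module:

    from fractions import Fraction

    third = Fraction(1, 3)          # exactly 1/3, no decimal expansion involved
    print(third * 3 == 1)           # True
    print(third + third + third)    # 1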


There isn’t a proof because it’s actually kind of arbitrary that 0.999... = 1. Fundamentally, this is true because we chose it to be true.

Now, there are good reasons we chose it to be true, and that’s what people usually use as proofs. If it’s not true then a bunch of mathematical expressions become more inconvenient. But there is no reason as such why 0.999... could not have been defined as something that was always < 1.

Fundamentally, 0.999... has no intrinsic meaning, and its value depends on the meaning we decide to give this representation.


You are downvoted, but you are actually correct. 0.999... != 1 can be true in nonstandard analysis. So if using standard analysis over nonstandard analysis is a convention, then ultimately 0.999... = 1 is a convention too.

(The Wikipedia article even reproduces that argument)


.999... and 1 exist on a continuous line. If they are different numbers, name a number between them.


The Wikipedia article says how: you need a definition of real numbers that includes nonzero infinitesimals (IOW does not satisfy the Archimedean property).

So let there be an ω with 0.999... = 1 - 1/ω. Then a number between 0.999... and 1 would be 1 - 0.5/ω.


How do you know that 1/ω is not 0?


{ 0.999... | 1 } using surreal number notation.


? Just because there is nothing between two numbers does not mean the two numbers are equivalent. What nonsense is this


Either two numbers are equal, or they're not equal. (Unless we're calling into question the law of excluded middle, but must we?)


Without using 0.999..., please name any two real numbers that don't have a number in between.


That is true for the reals


I've had the same experience, even debating this topic with engineers. I think there are actually two hang-ups.

1. People have had it drilled into their heads that humans can't comprehend infinity. It was taken for granted by philosophers that an "infinite regression" is a logical fallacy (e.g., used in a proof by Thomas Aquinas), and that tricks such as infinity and the infinitesimal were not rigorous. Mathematical infinity has been a settled matter for all practical purposes since the early 20th century AFAIK.

2. Related to the above, most people also believe that there is always a gap in any knowledge, and something hiding in that gap. Thus it's perfectly natural to believe that there's something hiding between 0.999... and 1, that we just haven't found yet. Knowing for certain that there is nothing between 0.999... and 1 is regarded as a kind of arrogance.

I think the way to approach this with children is to teach math as an abstract topic, that's not necessarily rooted in the objects of everyday life. For instance there's no physics experiment that can test the necessity of any math being carried beyond roughly the 15th decimal place. Yet we enjoy exploring it anyway.


> It was taken for granted by philosophers that an "infinite regression" is a logical fallacy (e.g., used in a proof by Thomas Aquinas)...

Aquinas specifically objected to the notion of an essentially ordered infinite causal series. He had no objection to an accidentally ordered infinite causal series or other kinds of infinite series.

This distinction is extremely important for the purposes of understanding his proofs of God's existence, and people often unfairly reject his arguments because they conflate the two.

More reading here: http://edwardfeser.blogspot.com/2010/08/edwards-on-infinite-...


Couldn't you formulate a problem requiring extreme decimal-place accuracy, based on a physical process that's repeated in ways that compound small errors into bigger ones?


That would certainly be an interesting study, I just have never been able to come up with any concrete idea on my own.


I too was unable to convince my family but based on your comment I just thought of a new (to me) example I might have tried. It leverages the grasp of fractions that you mentioned people already have.

Everyone knows that 1/3 = .333... and it can be pretty easily shown that 1/3 + 1/3 = 2/3 = .666...

So I would ask them that since .333... + .333... = .666... does it make sense that .333... + .333... + .333... = .999...? And since .333... = 1/3 isn't .333... + .333... + .333... = .999... the same as saying 1/3 + 1/3 + 1/3 = 3/3? And since 3/3 = 1 and 3/3 = .999... it makes sense that 1 = 3/3 = .999...

This might work on your kids but in my experience recalcitrant people will either act bored as if they don't care or will try to claim that somehow they understood it all along.


What's interesting is that people pretty quickly become comfortable with the idea that 1/3 = 0.333…

So using that as a foothold, we can express 1/3 + 1/3 + 1/3 as 0.333… + 0.333… + 0.333… and it should be pretty easy to digest. At once we can see that in this little zone we've defined, 1 and 0.999… mean the same thing.

Not a rigorous proof, and one or two people will probably bring up whataboutisms like "that's just because the calculator can't do stuff!", but it should at least make the idea comfortable for most people.


This is a really good point. Maybe the problem is how we define equality. What's the test for when two numbers are equal?

People accept that 1/3 = 0.333333... The same people don't always seem to accept that 3*0.33333... = 1. Well, how are we defining "equals"? If we can give that definition in black and white, I think that may help.


> What's the test for when two numbers are equal?

I put this in another comment but: For the reals: eliminate a < b and a > b then conclude a = b.


If 3 * 0.33333... = 1, then 3 * 0.33333... != 0.9999..., then 0.9999... != 1


Maybe try expressing it in the form of money? Let's say a gallon of gas is 99 cents with infinitely repeating 9's: $0.999999... You're still going to end up paying a dollar a gallon for it because eventually it's going to get rounded off. No gas station operator is going to try to cut a penny for you and give you a fraction of a penny. They could argue that the fraction of a penny becomes infinitely small and that giving you a shaving off the side of a copper penny would be infinitely too large.

Now don't mind me while I open up a store where every price tag ends in 0.99...repeating and have a poor college student at the checkout lane with a penny shaver to calm down any rowdy customers he or she can't explain away.


What set the equality in stone for me was learning about limits and series, because 0.999... is essentially a funny way to represent a series.

Before that, despite accepting the proofs that were given to me, there was always something in the back of my brain telling me "mmmm there is something wrong in that". The only reasoning that came close to dispelling it was the following:

1 divided by 3 = 1/3 = 0.333..., but then 3 * 0.333... = 0.999... so 1 = 0.999...

This comment in the wikipedia page nails it down:

"The lower primate in us still resists, saying: .999~ doesn't really represent a number, then, but a process. To find a number we have to halt the process, at which point the .999~ = 1 thing falls apart. Nonsense."


You must go up to something like limits to make ... meaningful.


"The intelligibility of the continuum has been found–many times over–to require that the domain of real numbers be enlarged to include infinitesimals. This enlarged domain may be styled the domain of continuum numbers. It will now be evident that .9999... does not equal 1 but falls infinitesimally short of it. I think that .9999... should indeed be admitted as a number ... though not as a real number."


Is this a mental limitation, or is it a simple defense mechanism against diving into rabbit-holes of thought with no end and no real productivity? It seems much easier to come to the conclusion that .99999... and 1 are different numbers; is it really worth the effort to consider otherwise?

We create these abstractions to simplify our thought, and analyzing or over-analyzing these simplifications can have the opposite effect.


Infinity will forever be an abstraction, as there are no infinite physical actions you can carry out.

Which means that what infinite actions actually constitute is purely a matter of definition, as you cannot experimentally verify it. And that's why under some definition it makes sense to say 1+2+3+4+5+... = -1/12


This is essentially just a matter of limits, without which the world wouldn't make any sense.

So you must explain that if you move your hand closer to an object, technically you are halving the distance infinitely many times, but if 0.999... != 1 then your hand would never touch anything.


> We cannot imagine infinity

Now try imagining that some infinities are bigger than others: https://en.wikipedia.org/wiki/Aleph_number


This is one of my pet peeves in maths.

Although I do understand the concepts presented, the notion of "greater" makes no sense when applied to something without boundaries.

Yet it's used all the time.


Things quickly fall apart if you rely on your intuition.

Let's imagine all the odd numbers: 1, 3, 5, etc. Now imagine all the even numbers: 2, 4, 6, etc.

Can we agree that there is an "infinite" amount of numbers in each of those groups?

Now imagine all the odd numbers and even numbers together. That's also infinite right?

Would you say there are more "all numbers" than just "all odd numbers", or would you say that there are an equal number of them? (hint: the answer is equal).


Well, the answer is equal because of how you define equality of infinite sets (one-to-one and onto). It's a very useful definition, but it's hardly the only possible definition.


Personally, I think it makes perfect sense.

Take two sets A and B. If we can assign every element in A to a different one in B, we say that |A|≤|B|.

Makes perfect sense for normal, finite sets, right? As it happens, this definition extends to infinite sets as well.


Right, but many things make sense for finite sets that don't make sense for infinite sets. Just because you can extend that definition doesn't mean that it's "true" for infinite sets.


People mistakenly believe that decimal numbers don't have multiple representations - which, in a way is correct.

It is correct if you take the limit; people usually do not.


Not only can numbers have more than one representation, but they can also have zero!

Looking at you, irrational numbers.


Irrationals have a unique decimal representation in the mathematical sense: given a definition such as $x^2 = 2$, any digit of the decimal expansion of $x$ can be determined.


> Irrationals have a unique decimal representation in the mathematical sense

Not all of them do. Actually, so many don't that mathematically, the number of them that do is zero.

Sure, there are exceptions like sqrt(2) and sqrt(3), but there is an uncountable infinity of irrationals between these two numbers that just don't have a representation.


For irrationals, the problem with an infinite sequence of the digit 9 does not occur. So for any given irrational (given its definition), any digit is determined by this definition, because any irrational number can be approximated with arbitrary precision by a rational number (which has a unique decimal expansion). If you think otherwise, where is the problem?

This is not affected by the fact that irrationals cannot be counted. Given the irrational, a rational close enough exists which has the same decimal expansion for the first n digits, for any n.
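
Concretely, "any digit is determined by the definition" can be made computational for a number defined by $x^2 = 2$; a small Python illustration using integer square roots (math.isqrt is standard library):

    from math import isqrt

    n = 30                            # digits wanted after the decimal point
    print(isqrt(2 * 10 ** (2 * n)))   # floor(sqrt(2) * 10**n):
                                      # 1414213562373095048801688724209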


just tell them to write out the complete infinite sequence of 9's

when they are done they will have understood.

chuckle


Yep. Agree 100%. It is like the blue dress.

I think the problem is the repeating function. Infinite things are non-intuitive and should be presented differently.

Even here on HN you still see people confused about "convergence" and "identity". 0.999... doesn't CONVERGE, it literally is 1.

I suspect this persists even with students that have had second year college calculus that discusses convergent series and sums.


Fine. Define 0.999... as the limit of the series sum(n=1 ... N)(10^-n), as N-> infinity. This is standard high school calculus. "Number" and "series" and "limit" and "convergence" don't all mean the same thing. However this number is defined as the limit of a convergent series. So the question really is meaningful. (One clue that this question is meaningful is the amount of space introductory calculus textbooks use to address it.)

Because I can still ask, in black and white, what law of "equality" do I use to establish that my limit equals 1? (It does, if I import the definition of "equality" from the real numbers. That's what they do in calculus class. )


Thanks to a commenter who pointed out that my sum above should be

sum(n=1 ... N)(9*10^-n).

I can't, uh ... fully endorse that comment, which is not entirely accurate and doesn't answer my question. But I sure did miss the '9'.


You asked for a "law" of equality (whatever that is) and provided an answer that proves it converges to 1. What more could you possibly want?


We seem to agree on this: you don't think there's any need for a way to determine whether two real numbers are equal.

For ordinary math, though, using some criterion for equality (for example x>=y and y>=x) is basic and not controversial. So it seems unconvincing (to me) when you seem to imply the opposite.


There is an easy way to prove two numbers are equal. Typically in the reals there are three possibilities: a > b, a < b, a = b. If you eliminate a > b and a < b then you are left to conclude a = b. And this is exactly what is done in Apostol's Calculus Vol 1 (IMHO the greatest calc book ever written) chapter 1, when he proves that the area under x^2 from 0 to b is EXACTLY (b^3)/3, with no "calculus". You would be shocked how far into calculus the author gets with just that theorem. Can't recommend that book enough.


Thanks, I'll take a look. I like that kind of thing very much.

I use applied math. I haven't taken a class in real analysis. But it's fun how often grinding out the solution to a "real world," practical PDE turns out not to actually be the nicest (simplest and/or clearest and/or sufficiently insight-producing) way to understand the (hopefully) corresponding physical problem in the lab.

Stripping off the "calculus" and replacing it by limits sometimes seems to help highlight alternate perspectives that the magic "integrals" and "derivatives" kind of conceal.

Even when it's not more effective, it's definitely more fun.


> you don't think there's any need for a way to determine whether two real numbers are equal.

You are putting words in my mouth.

And you clearly do not understand the answer.

I guess I'm not very good at ELI5 because I very clearly answered your question with your own proposal.

Maybe when you get to college a professor can do a better job explaining it to you (if you actually make it to college, because you're going to struggle very hard if that's how you think when an answer is spoon-fed to you).


I'm not sure if you are asking for an answer or a rhetorical question? I'll assume the former.

Your terms are a bit jumbled, so let's keep it simple: you're asking how to prove whether an infinite sum converges and what its value is. Convergence proofs require analytic thought, meaning there may not be an immediate look-up. You need to convert the problem into the known corpus of convergent sums or use one of many tests (bounds test, integral test, etc.) to show it converges analytically. Which you only learn through experience and memorization (unless you want to re-prove hundreds of series... maybe you do!). Fortunately this one is easily re-written as a known convergent sum.

First, you missed a term in your sum (9), re-written here:

sum(n=1..inf) 9 * 10^-n

Step 1: you pull out the 9 and it becomes 1/10+1/100+1/1000...

Step 2: Then we shift the index to start at n=0 by adding 1/10^0 to the series and subtracting it back off:

1/10^0 + 1/10 + 1/100 + 1/1000 + ... + 1/10^n + ... - 1/10^0

Step 3: Now we've got ourselves a geometric series of just 1/10^n .. wikipedia does a great job explaining the sum convergence for GS from n=0...inf: https://en.wikipedia.org/wiki/Geometric_series

Step 4: compute geometric convergence

(1-r^n)/(1-r) = (1-(1/10)^n)/(1-1/10) -> 1/(1-1/10) = 10/9 as n -> inf

So we have 10/9 as the solution to Sum[n=0...inf](1/10^n)

Step 5: the remaining arithmetic

Now subtract our 1/10^0 ... and then * 9 = 1
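
As a sanity check on the algebra, exact rational arithmetic shows each partial sum closes the gap to exactly 1/10^n (Python's Fraction, standard library):

    from fractions import Fraction

    S = Fraction(0)
    for n in range(1, 8):
        S += Fraction(9, 10 ** n)
        print(n, S, 1 - S)    # 1 - S_n is exactly 1/10**n, shrinking toward 0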


> There is no proof that will ever satisfy a person dead-set against this.

Yes there is. There is a proof that uses only fundamentals of first year university analysis. When you see

0.99999....

this can be written as an infinite sum

\sum_{i=0}^\infty 0.9 x 10^{-i}.

Truncate the sum and set

S_n = \sum_{i=0}^n 0.9 x 10^{-i}

and now simply use the rules of geometric progressions to get the limit out:

0.1 S_n = \sum_{i=0}^n 0.9 x 10^{-i-1}

S_n - 0.1 S_n = 0.9 - 0.9 x 10^{-n-1}

0.9 S_n = 0.9 ( 1 - 0.1^{n+1} )

S_n = 1 - 0.1^{n+1}

Now let n tend to infinity to find the limit, which is 1.

You don't need to imagine infinity to do any of this.


Personally I've always thought "proofs" using "arithmetic" are right, but kind of stated backwards.

The point is that in elementary school arithmetic, you define addition, multiplication, subtraction, division, decimals, and equality, but you never define "...". Until you've defined "...", it's just a meaningless sequence of marks on paper. You can't prove anything about it using arithmetic, or otherwise.

What the "arithmetic proofs" are really showing is that if we want "..." to have certain extremely reasonable properties, then we must choose to define it in such a way that 0.999... = 1. Other definitions would be possible (for example, a stupid definition would be 0.999... = 42), just not useful.

What probably causes the flame wars over "..." is that most people never see how "..." is defined (which properly would require constructing the reals). They only see these indirect arguments about how "..." should be defined, which look unsatisfying. Or they grow so accustomed to writing down "..." in school that they think they already know how it's defined, when it never has been!


The way '...' is used here is perfectly consistent with being defined as a geometric series where the ratio between successive elements is 1/10 and the start term is the final digit. Geometric series always converge when the absolute value of the ratio is less than 1.

I should note that when I learned about rational & irrational numbers in elementary school (I think third or fourth grade), we used a "bar" notation where we'd put a bar over the last digits in a decimal expression that repeated forever (i.e. it corresponded exactly to a geometric series with r = (1 / 10)^k where k is the number of digits under the bar, though we didn't know about that at the time). Our teachers explained that the difference between a rational and irrational number was that there would be no pattern you could ever find in an irrational number that would allow us to use the bar, which is surprisingly accurate for grade school arithmetic.


> Personally I've always thought "proofs" using "arithmetic" are right, but kind of stated backwards.

I've never considered them right at all. By saying something like

0.9... x 10 = 9.9...

and then saying that

9.9... - 0.9... = 9

you're basically just a priori defining 0.9... to be 1. In other words you're basically just defining 0.9... as a symbol to be some number x which has the property that 10x - x = 9. So you're basically just defining it to be 1.

I've never seen a proof of 0.9... = 1 using Peano arithmetic which made any sense to me. I doubt one actually exists in any true logical meaning. Unless you're making use of limits, completeness, or something equivalent I don't see how a proof could possibly make any sense.


You only need to define 0.9… as 9/10 + 9/100 + 9/1000 + …. Without knowing how that series converges you can then use the two expressions mentioned to conclude that it has to be equivalent to 1.


> You only need to define 0.9… as 9/10 + 9/100 + 9/1000 + …. Without knowing how that series converges you can then use the two expressions mentioned to conclude that it has to be equivalent to 1.

Sure you can provide a hand-wavy argument and try to give some intuition if you'd like. That doesn't make it any sort of logical proof though. I guess it depends on what you're after.


The GP's definition is the fundamental definition of the decimal notation. It's exactly what the "0.9..." symbol means.

You can redefine the "0.9..." symbol to mean something else as much as you want, you can have it mean pi if you like, but then you are just changing the subject in the most unhelpful way.


> The GP's definition is the fundamental definition of the decimal notation. It's exactly what the "0.9..." symbol means.

> You can redefine the "0.9..." symbol to mean something else as much as you want, you can have it mean pi if you like, but then you are just changing the subject in the most unhelpful way.

I _know_ that.

I think I maybe should just bow out of this conversation. I'm apparently incapable of explaining myself in a way that is understandable to people here. I'll consider this my fault.

I'll just summarize: I don't think any "proof" that 0.9... = 1 that is only expressed in terms of arithmetic operations and does not make use of limits is legitimate. In other words I claim that a proof like "0.9... = x" means "9.9... = 10x" means "9 = 9x" is illegitimate. Instead of taking "0.9... = 1" on faith it takes "10 x 0.9... = 9.9..." and "9.9... - 0.9... = 9.0... = 9" on faith. There's no proof here. It's just shifting around symbols. Of course there are logical proofs, but they make use of limits/completeness/properties of real numbers explicitly.

Feel free to disagree...


Oh, ok. If I understand correctly, you mean that there isn't any standard algorithm for handling a sum with infinite terms if you don't include limits.

Well, I do disagree, not with the above statement, but the meaning of "0.9..." itself requires limits, so the discussion can never go anywhere if your assumptions do not include limits.


FWIW, I believe I understand, and agree with the point you’re trying to make. In most circumstances I wouldn’t have a left comment because there’s nothing useful for me to add, just a thumbs up. But I wanted to make an exception this time.

It’s worth remembering that most people who understand and agree rarely leave a reply.


> you’re basically just a priori defining 0.9... to be 1.

I think the point is not defining 0.9... to be 1, the point is that “...” means an infinite number of 9s. If you shift the decimal point by 1, then nothing changes, there are still an infinite number of 9s. If you shift the decimal point by 5 places, there are still an infinite number of 9s to the right. And here is the logical (induction) step: if you shift the decimal point by an infinite number of places, then there are still an infinite number of 9s to the right. This works for any repeating fraction, even when the repeating group has more than one digit.

> I’ve never considered them right at all.

Do you mean you disagree with the result, or that you agree with the result but don’t believe the proof is really a proof?


> I think the point is not defining 0.9... to be 1, the point is that “...” means an infinite number of 9s. If you shift the decimal point by 1, then nothing changes, there are still an infinite number of 9s. If you shift the decimal point by 5 places, there are still an infinite number of 9s to the right. And here is the logical (induction) step: if you shift the decimal point by an infinite number of places, then there are still an infinite number of 9s to the right. This works for any repeating fraction, even when the repeating group has more than one digit.

I've heard this argument many times. I understand the intuitive reasoning. I just don't find it a proof. I mean with reasoning like this why can't you have an infinite amount of 9s and then just put a 7 after that? What's keeping you from doing that? It's just a hand-wavy argument with no rules of any kind about what is allowed.

> Do you mean you disagree with the result, or that you agree with the result but don’t believe the proof is really a proof?

I've never considered that specific "proof" a proof. When 0.9... is given a proper definition of limits and considered within the real numbers, then sure of course it's true and the proof is legitimate.


There is no "after infinity".

You can justify the idea by defining a decimal representation of a number x as a vector x_2, x_1, x_0, x_{-1}, x_{-2} ..., with x_n ∈ {0, 1, ..., 9}. Negative indexes are digits after the comma, positive indexes before the comma. You can recover the original number simply using

x = \sum_{n=-∞}^{n=∞} 10^n x_{n} (1)

(this sum always converges as long as x_n = 0 when n > N, for some big enough N ∈ ℕ).

For example, the number three is represented by x_0 = 3, x_n = 0 otherwise. 0.9... is defined as x_n = 0 for n >= 0, 9 for n < 0. Now, by using limits, we could recover the original number using the formula above. However, we can do it without them. For that, we define just enough operations for the proof.

1. If z = x - y and for all x_n, y_n we have that x_n >= y_n, then z_n = x_n - y_n for all n.

2. If z = 10x, then z_n = x_{n-1}.

For the first operation, in order to be rigorous, we need to ensure that if z_n = x_n - y_n in the same conditions, then z = x - y. The proof of this part just consists of plugging the recovery formula (1): z = \sum 10^n z_n = \sum 10^n (x_n - y_n) = (\sum 10^n x_n) - (\sum 10^n y_n) = x - y. We can perform all those operations as we are guaranteed that (1) always converges.

Now, let y = 0.9... defined as in the example above (y_n = 0 for n >= 0, 9 otherwise), and let x = 10y (therefore x_n = 0 for n > 0, 9 otherwise). Now, define z_n = x_n - y_n, so that z_n = 9 for n = 0, 0 otherwise, which yields z = 9. As we demonstrated above, this implies that z = x - y, therefore 9 = 10y - y => y = 1, so 1 = 0.9... .

PS: I don't think one can make a proof without at least using some bits of limits to be able to switch between decimal representation as a vector and the number itself. However I don't think it's a problem, because you need the same bits to be able to talk of "0.9..." as a well-defined number.
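
(A direct transcription of that argument into Python, with digit vectors as functions from index to digit; my own illustration:)

    # Digit vectors as above: d(n) is the digit at index n,
    # with negative indexes after the decimal point.
    def y(n):                 # 0.9...: digit 9 at every negative index
        return 9 if n < 0 else 0

    def x(n):                 # x = 10y shifts every digit up one index: x_n = y_{n-1}
        return y(n - 1)       # i.e. 9.999...

    def z(n):                 # digit-wise difference, valid since x_n >= y_n for all n
        return x(n) - y(n)

    for n in range(-4, 3):
        print(n, x(n), y(n), z(n))    # z_0 = 9 and z_n = 0 elsewhere, so z = 9
    # Hence 10y - y = 9, i.e. y = 1.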


Don't conflate "I don't understand it" with "hand-wavy". The 2 are different things.

Another proof by contradiction I've heard of this is that if 0.9... != 1.0... then there exists a number in between the 2. What is it?


It is hand-wavy, because it is using a vague conception of infinity which doesn't correspond to the real, useful definition of the infinity labelled "..."


> why can’t you have an infinite amount of 9s and then just put a 7 after that?

You can. The proof still works if you do that.

What you’re refusing to accept here is the definition of infinity.


> You can. The proof still works if you do that.

> What you’re refusing to accept here is the definition of infinity.

I'm refusing to accept the definition of infinity? I have no idea what you mean by that. Would you make the same statement were you aware that I do in fact have a PhD in mathematics in the field of analysis? That I have in fact studied logic? Just as a hypothetical scenario.


The proof hinges on the definition of infinity. 0.9bar7 is a completely nonsensical number precisely because of the definition of infinity. Yes, I think you’re failing to accept the definition of infinity. You’re rejecting a proof that many other PhDs in math accept along with some notable mathematicians like Euler. I reject your appeal to authority; having an advanced degree doesn’t mean you’re somehow automatically right, lots of PhDs in math have failed the Monty Hall problem, lots of people with advanced degrees are wrong all the time. I myself am a walking example on occasion.

I haven’t heard a reason yet why the logic of the proof doesn’t work, I’ve only heard that you don’t accept it. What is the reason it doesn’t work?


> The proof hinges on the definition of infinity. 0.9bar7 is a completely nonsensical number precisely because of the definition of infinity. Yes, I think you’re failing to accept the definition of infinity. You’re rejecting a proof that many other PhDs in math accept along with some notable mathematicians like Euler. I reject your appeal to authority; having an advanced degree doesn’t mean you’re somehow automatically right, lots of PhDs in math have failed the Monty Hall problem, lots of people with advanced degrees are wrong all the time. I myself am a walking example on occasion.

I wasn't appealing to authority. I never said that implied that I was correct. (Had I wanted to do that, I would have brought that up much earlier in this comment thread.) I was really just curious if you still believed that I was "failing to accept the definition of infinity" even if you knew that about me. Apparently you do.

> I haven’t heard a reason yet why the logic of the proof doesn’t work, I’ve only heard that you don’t accept it. What is the reason it doesn’t work?

https://news.ycombinator.com/item?id=23007600


Okay, I understand what you’re trying to say. I accept that the proof does not define what “...” means formally, and that is the problem you have with it. The infinity is understood implicitly to have the property that the 9s never end. In that sense, I think the proof does make use of limits, it just relies on definitions not written as part of the proof. Isn’t that okay, doesn’t that actually happen very often? The proof also doesn’t define what addition, multiplication, and equality mean either, but other kinds of mathematicians might complain on those grounds. How complete is complete, and what is the purpose of a proof if not to demonstrate a truth economically, relying on, rather than restating, the already laid foundation? How could this particular proof be shown to have weakness or fail before adding rigorous definitions of limits? Would it be any more apparently true to a wide variety of mathematicians and students if it had the type of rigor you’re advocating?


See my comment here since it's related to this: https://news.ycombinator.com/item?id=23008366

Assuming that all my suspicions in that comment are correct and these proofs actually are invalid proofs (not the results which are true), then the question might become: does it matter if the proof of a fact is incorrect if the fact itself is correct? That is a philosophical question and I'm honestly not sure how I'd answer it...


I don't necessarily understand your use of the word "invalid", when what it seems you mean is incomplete and/or too informal for your taste.

> does it matter if the proof of a fact is incorrect if the fact itself is correct?

Your language isn't allowing for a notion of precision, or for multiple forms of correctness, and it's not considering audience, communication or level of expertise either. I don't think it's a question of correct vs incorrect, I think you're asking for more precision, and/or for a form that meets your own higher standard.

It does matter if a proof is wrong, if there is a step in the proof that can be shown to be false. But that's not the case here, what you want is additional definition.

BTW, reading the blog post you linked to on surreals, the "proof" looks to me to be more hand-wavy than Euler's proof that .9bar = 1. The proof begins by stating there are a finite number of 9s, in direct contradiction to the hypothesis. 10^-inf = 0, so from this blog post I don't yet see any reason why surreals clarify anything here, it feels like the opposite, it feels like obfuscation.

This could be an argument over representation and not the values of numbers. If you start by defining 0.9bar to be a different number than 1 for the specific reason that it's written down a different way, then fine. That's what the surreal "proof" tells me. Euler's proof is talking about the value of 0.9bar in the limit, not the representation. (Even if that's stated without rigorous definitions of limits.) The proof is saying the values of .9bar and 1 are the same. If the surreal number .9bar were strictly less than 1, that must mean there's another surreal number closer to 1, but there isn't, so I don't accept the surreal argument as valid logic, other than playing a semantic trick by saying 'look I defined the way we write a number to be meaningful, therefore 0.999... is by definition different than 1.'

By the way, that blog post claims "The set of real numbers contains no infinitesimals." Wikipedia claims: "the surreal number system is a totally ordered proper class containing the real numbers as well as infinite and infinitesimal numbers..."


I'm only going to respond to a few chosen points here because I find your post kind of meandering and hard to follow. I'm not cherry-picking (I don't care about "winning" any argument), I'm just trying to focus my response a bit so it increases the likelihood that you'll understand what I'm trying to say.

> I don't necessarily understand your use of the word "invalid", when what it seems you mean is incomplete and/or too informal for your taste.

> [...]

> It does matter if a proof is wrong, if there is a step in the proof that can be shown to be false. But that's not the case here, what you want is additional definition.

I'm going to try to be more formal, but not entirely formal since (1) the details are almost never-ending and require a lot of formal logic and (2) I'm not 100% sure about the reasoning myself.

What I mean by a "proof" is a sequence of logical steps that start with axioms of your logical system. We are obviously not looking at things that formally, but I actually think it is still an important point. The rational numbers can be thought of as being defined by certain axioms of arithmetic. E.g. you have the natural numbers as well as the minimal extra values so that you can add, subtract, divide, multiply, etc.--in other words you have a field. Let's call these the arithmetic axioms. Then when you go to the real numbers you basically extend the rational numbers in such a way so that you have completeness and retain all the previous properties. So basically for real numbers you have the prior arithmetic axioms and you have the completeness axiom.

Next comes the question of what is actually to be proved. For that we _must_ make some sort of definition of what we mean by "0.9...". Let us define it as the limit of the sequence of partial sums (all of which are rational numbers) _if_ it exists. So to prove "0.9... = 1" means to prove that the limit exists and equals 1.
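Written out in symbols (just restating the definition above, nothing new):

    0.999... := lim_{n -> oo} s_n,    where s_n = sum_{k=1}^{n} 9/10^k = 1 - 10^{-n}

and each partial sum s_n is a rational number.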

So let's say we believe we have a proof that the limit equals 1 using an argument like this:

0.9... = x => 9.9... = 10x => 9 = 9x => 1 = x => 0.9... = 1

This is not a formal symbolic proof, but there is one thing we can see immediately: this proof does not make use of the completeness axiom. Therefore it should logically be the result of a sequence of logical steps starting from what I earlier referred to as the arithmetic axioms. Now here is the point where the surreals come in. The point with the surreals is that they contain rational numbers and the arithmetic axioms still apply. (To be clear, I haven't thought this through 100%, but I am almost certain this is true modulo my hand-wavy reference to "arithmetic axioms".) That means that that same proof should work inside the surreal numbers to also prove that the limit of the partial sums is 1. But here is the key important point. Within the surreal numbers, the limit of the partial sums is _not_ 1. So what does this tell us? The original supposition that there exists a proof only making use of the arithmetic axioms cannot be true. So the proof must make use of the completeness axiom. Well, the surreal numbers are _not_ complete, so that axiom doesn't exist there, and therefore the fact that the proof exists and works within the real numbers and not the surreal numbers is not a contradiction.

Okay this is a bunch of logic mumbo jumbo and no grade-school student should be expected to worry about things at this level, but we can still give a more simplified version of the proof that actually is in essence correct. Some people in this thread use the argument: "Well since we know the number must be less than or equal to 1 and we also know it must be bigger than any number smaller than 1, then it must be 1." That argument (while a priori assuming convergence) is at least implicitly using an intuitive idea of completeness. Is it logically formal and 100% rigorous? Of course not. But it is explicitly making use of a property of real numbers while the first proof is not. I think if students are to be taught anything about the real numbers, then they should get some general intuition for this property. The algebraic version of the proof is invalid (my claim, which hopefully is at least a little more well-supported given this comment here) and it doesn't give the intuition they should (hopefully) get anyway.
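For what it's worth, the intuitive argument in that last paragraph can be sketched in symbols (still informal):

    s_n = 1 - 10^{-n} <= 1 for all n;  and for any x < 1, s_n > x once 10^{-n} < 1 - x

so the least upper bound of the s_n, which completeness guarantees exists, can only be 1.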

In any case, hopefully this does something to help clear up what I've meant throughout these posts.

P.S. Finally, in response to this:

> By the way, that blog post claims "The set of real numbers contains no infinitesimals." Wikipedia claims: "the surreal number system is a totally ordered proper class containing the real numbers as well as infinite and infinitesimal numbers..."

So what? Are you saying those statements are contradictory? If so, how? And if not, what are you saying?


Sorry, I misread the blog post’s statement about infinitesimals, I thought it said surreals.

How did Euler actually write his proof, do you know, or have a link? I’ve poked around online but can’t find it.


I usually hear people say that he wrote the algebraic one (i.e. the one that I'm saying is invalid). But I also have no idea where he supposedly wrote it so I can't verify it. For me it's mere hearsay.

By the way I wouldn't hold it against him if he did write the proof that way. I'm pretty certain all the logic/model theory that comes into play came long after his death. The surreal numbers certainly did.


>if you shift the decimal point by an infinite number of places, then there are still an infinite number of 9s to the right

>0.9bar7 is a completely nonsensical number

For the same reason that 0.9...7 isn't a meaningful number, you cannot move the decimal 'an infinite number of times' and then after this, look at what number you have left and see it still has infinite 9s left. It's like you're trying to perform transfinite induction on the set of numbers generated by moving the decimal point. You can only move the point a countable number of times, so there is no sense in which the property can still be true after infinitely many times.


> For the same reason that 0.9...7 isn't a meaningful number, you cannot move the decimal 'an infinite number of times' and then after this, look at what number you have left and see it still has infinite 9s left.

Yes you absolutely can, for exactly the same reason. 0.9bar7 is nonsensical precisely because you can move the decimal to the right an infinite number of times, and still have an infinite number of 9s before the 7.

> You can only move the point a countable number of times

Not true, the implicit definition of “...”, the very statement that there are an “infinite” number of 9s, means exactly the opposite of what you claim, it means you can move the decimal an infinite number of times.


In your proof you say

>And here is the logical (induction) step: if you shift the decimal point by an infinite number of places, then there are still an infinite number of 9s to the right

This is not how induction works. The induction shows that you can shift the decimal point any finite number of steps to the right and there will still be infinite 9's after it. If you want to show something is still true after infinitely many steps, you require transfinite induction, but this doesn't make sense because the '...' decimal representation only represents a countably infinite number of 9's.

This is the same reason 0.9bar7 doesn't make sense - because decimal representations only have countably many digits.


>because decimal representations only have countably many digits

This should rather say that it's because decimal representations have digits indexed by the natural numbers I guess, rather than by any larger countable ordinal


Limits and completeness are convenient shorthands here. But let's start from a more basic perspective.

You say

> you're basically just defining 0.9... as a symbol to be some number x which has the property that 10x - x = 0.

Okay, well what is an alternate definition that makes more sense intuitively?

0.333...., for example, is one that seems pretty intuitive. We can get to .333... by iterated long division of 1 by 3.

    3 | 1
        0    (3 * 0 = 0) => 0.
        10   (add zero)
         9   (3 * 3 = 9) => 0.3
         10  (add zero)
          9  (3 * 3 = 9) => 0.33
          ...

And we can verify the reverse by doing the same trick above; 0.333... = 10 * 0.333... - 3 => 3 = 9 * 0.333... => 0.333... = 3 / 9 = 1 / 3.

So does this trick always work? If we have a repeated decimal, can we always multiply by 10 ^ (length of repeated sequence), subtract off, and get the value of that repeated decimal? If so, then it is reasonable to say that 0.999.... is equal to 1.

We can't really go in the forward direction without cheating (that is, going from 1 -> .999...); the best we can do is to modify long division to allow us to do it:

    3 | 3
        0    (3 * 0 = 0)  => 0.
        30   (add zero)
        27   (3 * 9 = 27) => 0.9
         30  (add zero)
         27  (3 * 9 = 27) => 0.99
         ...
And so on.

Obviously this isn't in Peano arithmetic exactly, but I think it holds together. If we allow repeated decimals in general to be valid representations of rational numbers, then we have to accept 0.999.... is equal to 1.
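If it helps, the modified long division above is easy to mechanize. Here is a minimal Python sketch (the function name and the digit-capping rule are my own framing of the procedure, not anything standard):

    # Schoolbook long division of a/b (0 <= a <= b), emitting n decimal digits.
    # Capping each digit at 9 is what forces the 0.999... form when a == b.
    def division_digits(a, b, n):
        out, r = [], a
        for _ in range(n):
            r *= 10                 # "add zero" (bring down a zero)
            d = min(9, r // b)      # largest single digit that fits
            out.append(d)
            r -= d * b              # carry the remainder to the next step
        return out

    print(division_digits(1, 3, 6))  # [3, 3, 3, 3, 3, 3] -> 0.333333...
    print(division_digits(3, 3, 6))  # [9, 9, 9, 9, 9, 9] -> 0.999999...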


> I've never seen a proof of 0.9... = 1 using Peano arithmetic which made any sense to me. I doubt one actually exists in any true logical meaning.

Peano arithmetic only covers nonnegative whole numbers, so one will never exist.


> Peano arithmetic only covers nonnegative whole numbers, so one will never exist.

Thank you for the pedantry. How about I replace "Peano arithmetic" with the "operations of multiplication/addition/division/etc. expressible upon the rational numbers"?


You brought up Peano arithmetic; I think people who respond to you can discuss Peano arithmetic without being accused of pedantry.

(Disclosure: I have no idea what Peano arithmetic is.)


Given that mcphage understands what Peano arithmetic is, I find it entirely unbelievable that he/she did not understand my point especially with phrases of mine like "Unless you're making use of limits, completeness, or something equivalent". So yes in this case I honestly find the pedantry entirely unhelpful.

Regardless, the point has been clarified, but discussing it further is kind of pointless.

edit: It just occurred to me that the original pedantic comment essentially breaks the following Hacker News guideline:

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

I just found it a bit interesting considering it is my response complaining about the pedantry and not the pedantry that is receiving downvotes.


It's not a rational number.

Real numbers are defined as equivalence classes of infinite sequences of rationals: if the difference of two sequences tends toward zero, the sequences represent the same real number. The difference between 0.999... and 1.000... clearly tends towards zero as it heads off to infinity, and so they are equal.

If you want to argue that it doesn't then you have to come up with some other definition for numbers which have an infinite decimal expansion.

(Technically, of course, 1 is a rational number, but if you're using 0.9999... to represent it, you're using a real number representation, so you're bound by the definition)
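Spelled out, the equivalence being used there is:

    (a_n) ~ (b_n)  <=>  lim_{n -> oo} (a_n - b_n) = 0

and here a_n = 1 - 10^{-n} (the truncations of 0.999...) while b_n = 1 for all n, so a_n - b_n = -10^{-n} -> 0: the two sequences name the same real number.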


> It's not a rational number.

> Real numbers are defined as an equivalence class such that if the differences of two infinite sequences of rationals tend toward zero, then they are equal. The difference between 0.999... and 1.000... clearly tends towards zero as it heads of to infinity, and so they are equal.

> If you want to argue that it doesn't then you have to come up with some other definition for numbers which have an infinite decimal expansion.

> (Technically, of course, 1 is a rational number, but if you're using 0.9999... to represent it, you're using a real number representation, so you're bound by the definition)

I'm not sure why you think I don't know the difference between rational and real numbers, but I assure you I do. What I said was I don't see how a proof involving the standard arithmetic operations found within the rational numbers, but not including any concepts of limits, completeness, etc. is invalid. Let me know if you still don't understand my point.


I think you have mis-stated yourself. Either that, or I don't understand what you're saying.

Trying to rearrange it and remove as many negatives as possible, I started with your statement:

> I don't see how a proof involving the standard arithmetic operations found within the rational numbers, but not including any concepts of limits, completeness, etc. is invalid.

I think what you mean is that any proof that does not use the concepts of limits and completeness is going to be invalid.

That seems clear to me, the reason being that one needs to define what one means by the sequence of symbols "0.9999...".

You can say "It's infinitely many 9s stretching off to the right", but that doesn't tell me what it means.

People seem to think it does, but when I dig deeper, they usually don't have any sense of what it means. And therein lies the problem (as I see it). People blithely write the glyphs, but don't have a concrete interpretation.


Here is a comment where I tried to clarify, but yes you seem to basically understand my point:

https://news.ycombinator.com/item?id=23007600


My personal experience is that people want to argue from intuition about what 0.9999... means, and when you try to make it precise they say that it's obvious. Then they derive all sorts of nonsense and conclude that mathematics is all rubbish.

If someone really wants to understand it then I'll explain current mathematical thinking, including non-standard analysis and the surreals. But most people don't want to put in the work to understand how these issues have been resolved, and just want to argue from their intuition.


Someone else here brought up the surreal numbers and my intuition says that it's right to do that. The various arithmetic proofs thrown around here don't explicitly make use of completeness. As such they should be correct proofs in the surreal numbers as well. But they basically are not. Here is a blog post about it:

https://thatsmaths.com/2019/01/10/really-0-999999-is-equal-t...

I don't quite know how to formalize it, but I'm pretty certain that if these proofs logically worked (in the "theory of proofs sense"), then they should work in the surreals as well.

Anyway it's just intuition. My main point in this thread is that I don't really accept the proofs of this that don't use completeness as a step. Though I do suspect that proofs not making use of it are actually incorrect proofs in their own right. If I were curious enough I'd think back about formal proofs and models and all that jazz, but I probably already have spent more time in this thread than I should. :)

edit: The more I think about it, the more I feel like someone actually explained this to me (i.e. why this proof is wrong, using surreals as reasoning) a long time ago and I'm just remembering echoes of it in my mind. Wish I could remember something more useful...or that I were a logician...


Yes, that proof depends upon the textual representation of rational numbers (a dot and a series of digits). Try it in hexadecimal - it becomes opaque nonsense. Without some mathematical basis for 0.9... X 10 being something, there's a dangerous dependency on the representation that makes many folks uneasy.


I don't see how it would be significantly different in other bases. In hexadecimal it'd be 0x0.ffff.. * 0x10 = 0xf.fff..

The rules are essentially the same.
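A quick sanity check with exact rational arithmetic (a throwaway Python sketch, purely my own illustration) shows the base-16 partial sums behave exactly like the base-10 ones:

    from fractions import Fraction

    # n-digit truncations of 0.999... (base 10) and 0.fff... (base 16):
    # in both cases the truncation is exactly 1 - base**(-n).
    for base, digit in [(10, 9), (16, 15)]:
        s = sum(Fraction(digit, base**k) for k in range(1, 6))
        print(base, s, 1 - s)   # the remaining gap is exactly base**-5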


Oh right! My mistake. I was stuck in my head on "10 == nine plus one" instead of your insight: multiply by '10' in whatever base.

Anyway, it's still an artifact of representation. If rationals were represented as fractions, then it would be unrepresentable. 1/1 X 10 == 10/1?


Here's one stupid, simple proof that doesn't require any predefinition.

We do longhand division of 9 by 9, but start by putting a 0 in the first place.

        0.99
    9 | 9.0000
        0
        90
        81
         90
         ...
Hence, 9 / 9 = 0.999... = 1.


What's very neat is that the algebraic argument also works for other sequences like 0.888...

    x = 0.8...
    10x = 8.8... = 8 + x
    9x = 8
    8/9 = 0.8...


And I thought mathematicians would reject normal arithmetic operations over the domain of `N...` elements. They're always so ultra-rigorous about classifying what is or is not defined, and over which domain... but then they let an infinite sequence be treated like any finite number.


> but then they let an infinite sequence be treated as any finite number

Technically it’s an infinite series (sum of an infinite sequence), which is a finite number in certain cases like this one.


Certain cases, so define this special one; frankly, it doesn't make my brain happy (but who am I).


It's not a special case. It's the standard general definition of the limit of a sequence. The limit of a sequence a_1,a_2,...,a_n,... exists if and only if there exists some L (the limit) such that, for any (arbitrarily small) epsilon > 0, there exists some N with the property |a_n - L| < epsilon for every n>N. In this case a_n is n 9s after the decimal point (or a_n = 9*sum_{i=1}^n (.1)^i).
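Instantiated for this particular sequence, the definition reads:

    a_n = 1 - 10^{-n},   |a_n - 1| = 10^{-n} < epsilon   whenever n > log10(1/epsilon)

so L = 1 satisfies the definition, and by uniqueness of limits nothing else does.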


> The point is that in elementary school arithmetic, you define addition, multiplication, subtraction, division, decimals, and equality, but you never define "...". Until you've defined "...", it's just a meaningless sequence of marks on paper. You can't prove anything about it using arithmetic, or otherwise.

Sure, but the point of "elementary school" arithmetic is not "elementary" arithmetic, as a mathematician would define it :-)

The goal is to teach people to reason by matching patterns. Deductive/Inductive reasoning can slowly proceed from that, as they try to frame their intuition for patterns into increasingly more general abstractions.


If we say that infinitesimals exist - that 1/3 != 0.33.., that 1 != 0.9999..., and that the probability of possible events is never 0 - what are the properties that we would lose?


If we say that infinitesimals exist, it still happens that 1 = 0.999…. It just happens that 0.999… ≠ 1 - 𝛚.

0.999… = 1 is a property of the way we write some rational numbers, not of the number system itself.


wouldn't 0.999.. be equal to 1 - 10^-w since it's only a countably finite series of nines?


0.999...9 with a finite number of nines is clearly less than 1.

0.999... with infinite nines is equal to 1.


Sorry, I typoed; I meant countably infinite. By w I meant the ordinal.


I have never seen a number system with infinitesimals where the addition wasn't updated to ignore smaller classes if they come with larger ones.

That is, for any number system I've seen, 1 = 1 + dx, and infinity = infinity + 100.


The obvious one seems terrible enough, that division is no longer the inverse of multiplication: (1/3)*3 != 1


Nonstandard analysis exists (with infinitesimal and infinite numbers), but (1/3)*3 and 9/9 are still the same there: both equal 1. The problem is that the numbers 0.333... and 0.999... don't really exist.


Completeness is one of the most important properties of real numbers. Basically, you will have to completely throw away real analysis.


1/3 does equal 0.3.. though.


A formally rigorous proof of this (in Metamath) is here:

http://us.metamath.org/mpeuni/0.999....html

Unlike typical math proofs, which hint at the underlying steps, every step in this proof only uses precisely an axiom or previously-proven theorem, and you can click on the step to see it. The same is true for all the other theorems. In the end it only depends on predicate logic and ZFC set theory. All the proofs have been verified by 5 different verifiers, written by 5 different people in 5 different programming languages.

You can't make people believe, but you can provide very strong evidence.


It depends on more than just ZFC, also on the definitions of the real/complex numbers. The crux of the proof is that 0.99999... is being constructed within the real/complex numbers, and in that system it is equal to 1.

And at the point where students see this, the whole concept of real numbers and infinity is usually ill-defined. I actually understand the skepticism for this theorem and where it comes from. The proof relies on the existence of a supremum, which is non-trivial.


I think this is spot on, at least for me personally.

I am not very good at mathematics, so I never questioned my professors when they said that "You cannot treat infinities as regular numbers".

Perhaps due to that statement, I did not really pursue these kinds of equations. For instance, I do not really see how the algebraic argument on the Wiki is any different from:

  2 * inf = inf
  inf + inf = inf   (subtract inf from both sides)
  inf = 0
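Amusingly, IEEE-754 floating point bakes in the same warning; a quick illustrative check in Python:

    inf = float('inf')
    print(2 * inf)     # inf: multiplication absorbs, like the first line
    print(inf + inf)   # inf: addition absorbs too
    print(inf - inf)   # nan: "subtract inf from both sides" is undefined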


There is this phrase, often used when describing the decimal expansion of pi - "keeps going infinitely". This phrase is not exactly incorrect, but I wonder if it misleads people into thinking that an "infinite decimal" is "a kind of infinity", which it really isn't in any meaningful way.


I think it absolutely gets confused.

Infinity, the number, is routinely confused with the existence of an onto function mapping the digits of pi to a set with the cardinality of the natural numbers. But sadly most people don't have the mathematical maturity to understand the difference when they encounter their first irrational number (normally pi).


That causes a contradiction which is why infinity can't be used that way. But what is the contradiction with 0.999... = 1?


Multiply both sides by ‘x’, then subtract one side from the other, then take the limit as x -> inf. This is obviously undefined. To get to zero, you have to make a new rule that one form of infinity is bigger than another form of infinity.

Infinity is very slippery, and there are several divergent fields of math that depend on particular definitions of it.


It's true that it depends on the definitions of real/complex numbers. Many other things turn out to be provable from ZFC. A discussion about this, from the viewpoint of Metamath, is here: http://us.metamath.org/mpeuni/mmcomplex.html


Very cool page!

The only interesting step is step 32, which is just an application of http://us.metamath.org/mpeuni/geoisum1c.html, whose only interesting step is step 21 which is just an application of http://us.metamath.org/mpeuni/geoisum1.html.

The key steps for that are http://us.metamath.org/mpeuni/geolim2.html and http://us.metamath.org/mpeuni/isumclim.html , which are indeed the crux of the issue


I wonder if this particular example does the opposite. It's too rigorous for a normal human. It seems humorously rigorous to state things like 10 is not equal to 0, 10 is a real number, and 1 is less than 10, not to mention the number of seemingly redundant repetitions showing how each of the numbers discussed is a complex number, separately and individually. Does defining 1+9=10 make the proof more believable?

Rigorous proofs like this are for mathematicians and computers; I doubt they help anyone believe who doesn't already believe.

I'm not sure how best to help someone who doesn't believe, but it could take arguments with stronger intuition, or just allowing the person to demonstrate with their own proof. It probably depends on the person, and why they don't believe it.


That is cool! I've been thinking about building a site that lets you explore a big graph of math proofs for a long time, since doing my pure math degree. Glad to see someone else has done something like it.


The proof relies on the assertion that the supremum of an increasing sequence is equal to the limit. This is mathematical dogma, and should be introduced as such. Once that is accepted, it becomes obvious.

This is illustrative of what I see as a fundamental problem in mathematics education: nobody ever teaches the rules. In this case, the rules of simple arithmetic hit a dead end for mathematicians, so they invented a new rule that allowed them to go further without breaking any old rules. This is generally acceptable in proofs, although it can have significant implications, such as two mutually exclusive but otherwise acceptable rules causing a divergence in fields of study.

When I was taught this, it was like, “Look how smart I am for applying this obtusely-stated limit rule that you were never told about.” This is how you keep people out of math. The point of teaching it is to make it easy, not hard.


This is in large part due to the difficulty with reasoning about infinite representations. You do have to add axioms to your system to be able to reason about 0.9999...

Stating that 0.9999... = 1 without exposing these new tools meant to grapple with concepts that physically cannot be grappled with is a huge mistake.


And this I think is the real issue. When someone says that 0.999... = 1.0, what they are saying is that this is true given a number of assumptions that we are taking for granted that would not be obvious to a non-mathematician. There's a lot of math hiding in those '...'.


What? 0.999... = 1 is not dogma. Please don't spread misinformation. And at least read the link before commenting on something.


Did you read what @jl2718 posted? Namely:

> the supremum of an increasing sequence is equal to the limit

-- this is not misinformation (and to anyone familiar with some introductory analysis, correct[1]). Of course, calling it "dogma" is a bit inflammatory, but not technically wrong. It's kind of a made-up rule to help us work with infinities (particularly in ℝ -- but it happens all the time in set theory, as well).

But to agree with GP, touting it as "intuitive" or "mind-blowing" is indeed silly.

[1] http://www.math.toronto.edu/ilia/Teaching/MAT378.2010/Limits...


I do think it is technically wrong to call it dogma - the decimal is a geometric series with a limit, right? And limits have an unambiguous definition: it's the smallest value that the series approaches but never exceeds as it tends to infinity. I think the part that is admittedly weird is that the notation "0.999..." refers to the limit as the series tends to infinity, and it kind of hides that fact from you. Even just writing the geometric series down and plopping "infinity" as the value for x would be wrong, as it's the limit that is equal to 1 as x tends to infinity. So there's arguably more hidden notation than the ellipsis implies, but nothing is pulled out of a hat here or defined for definition's sake.


> the decimal is a geometric series with a limit, right..

Right, but that's not really the crux of the matter. Hint: look at how the supremum is defined[1]. The definition of the supremum is how we end up with 0.999... = 1.

[1] https://math.stackexchange.com/questions/1977204/limit-of-mo...


I suppose my point is that you could turn the repeating decimal into an infinite series, and a student might accept that, and you could define the supremum and they might agree that it is 1. But then you ask them if the series is equal to the supremum; they don't know what to do with the series, so they turn it into a sequence. Now they ask whether the last item in the sequence is equal to the supremum. Of course not! This is by definition.

And now you realize that you and the student have been operating by different rules. Their rules of equality are based on symbolic equality, so you actually have to relax the rules a bit to make limit equality work. And then, more importantly, you have to show that all the other rules are still intact. Actually, in this case, they aren't. Symbolic equality involving infinity is now horribly broken, and you have to express all equality in terms of limits to maintain consistency. Explore this further and you keep finding more inconsistencies that have to be settled by new rules that define new areas of mathematics.

So who is right? The natural world appears to be much more permissive than limit equality, preferring epsilon-equality. Symbolic equality is the only purely self-consistent system, but you can't do much with it. It's also possible that the natural world works with symbolic rules (quantum) but the complexity is great enough to resemble epsilon equality (continuum).

So, .999... == 1 by tautology. It's not some brilliant mathematical insight. The interesting part is the consequence of defining it as so.


I remember being doubtful when presented with this in middle school, but being shown it as fractions makes it obvious:

      1/3 =     0.333..
  3 * 1/3 = 3 * 0.333..
      3/3 =     0.999..
        1 =     0.999..


I don't mean to troll you, but if you were doubtful that 0.999... = 1, then you should also be doubtful that 0.333.. = 1/3. Any argument that 0.999... is not quite 1 can also be used to argue that 0.333... is not quite 1/3.

I think it's mostly a matter of definition, since mathematicians consider sums of infinite series equal to their limit (if it's finite), I guess for many practical reasons. If you accept this, then 0.999... = 1. If you don't, then 0.999... can't be assigned a value (but converges to 1), which may be the intuitive understanding of infinite series for some.


> if you were doubtful that 0.999... = 1, then you should also be doubtful that 0.333.. = 1/3

I disagree. Any middle school student can calculate 1/3 to be 0.33333... using long division, but there's no immediately obvious way to go from 1 (or 1/1) to 0.9999...


> Any middle school student can calculate 1/3 to be 0.33333...

I can just do it backwards - is 1/3 equal to 0.33333...?

1 / 3 = 0.33333..., so 3 * 0.33333... = 0.99999..., and my child brain "knows" that 1 != 0.99999...

In my child brain this proves that 1 / 3 is not equal to 0.33333..., it's just an approximation.

So I agree with larschdk, those problems are equivalent and one can't be used to prove the other ...


> Any middle school student can calculate 1/3 to be 0.33333... using long division, ...

...the same way that Chuck Norris can count to infinity... twice!


A smart middle-schooler is absolutely capable of understanding that dividing 1 by 3 results in the infinitely repeating sequence 0.33333... Even without understanding the concept of infinity, they will quickly realize that there's no reason to believe the problem will stop adding a 3 to the end of the result with each iteration.


> A smart middle-schooler is absolutely capable of understanding that dividing 1 by 3 results in the infinitely repeating sequence 0.33333..

And how will a smart middle-schooler know that the result of running the long division algorithm is exactly 1/3, rather than some approximation?


Recursively?


> Any argument that 0.999... is not quite 1 can also be used to argue that 0.333... is not quite 1/3.

Yes, but there aren’t good arguments for either of them, and that’s the point. The difference is that you have probably already learned how to divide 1 by 3 and have thus convinced yourself that 1/3 does indeed equal 0.333 repeating. It’s not so simple to come to the conclusion that 0.999 repeating equals 1 from simple long division that you would encounter in grade school.


See my other reply along the same lines. I was thinking about adding this caveat to my original message as well, but I think understanding that decimal numbers with infinite digits exist and that 0.999.. = 1 are separate things. The second being less intuitive.


I agree. The argument ignores the rule that infinity is a point which can never be reached; it can only be approached. So repeating 9s infinitely many times will still not reach 1, it will only approach it.


Another secondary school 'proof'

  x = 0.9999.....
  10x = 9.9999.....
  (10x -x) = 9x = (9.9999.... - 0.9999....) = 9
  x = 9/9 = 1


or,

      x = 0.9999...
    10x = 9.999...
    10x = 9 + 0.999...
    10x = 9 + x
     9x = 9
      x = 1
Presented slightly more clearly

https://en.wikipedia.org/wiki/0.999...#Algebraic_arguments


Yeah that’s way more complicated than it needs to be and I’m tempted to replace that whole section with:

    x = 0.9999...
    2x = 1.9999...
    2x - x = 1
    x = 1


It's much more intuitive that 100.9999...=9.9999... than that 20.9999...=1.9999...


You've met the formatting limitations. You mean:

"It's much more intuitive that 10 * 0.9999... = 9.9999... than that 2 * 0.9999...= 1.9999..."


How do you jump from

x = 0.999...

to

2x = 1.999..


Picture an infinite ticker tape in your mind.

Write

   0.9
 + 0.9
 -----
   1.8

Now keep extending the 9’s. You have to carry a one, so fix the carry and then add more nines.

Where people keep getting tripped up is thinking you can stop when you get tired, or die, or when the universe ends in heat death. You don’t get to stop. You never get to stop. It’s nines all the way down.


It’s less intuitive, but still relatively straightforward from performing the “long” multiplication. You do need to convince people that the “8 at the end“ is irrelevant, however.


You need to convince people there is no 8 at the end because there is no end.

If you see an 8 it’s because you didn’t carry the 1. Keep going.

Another way to think about it is that you have n digits, and I’m using the n and n + 1 digit at the same time. But since n goes to infinity, +1 hardly matters.


The difficult part is to convince people that (9.9999.... - 0.9999....) = 9. Someone else mentioned Zeno's paradox involving Achilles and the tortoise. Since you multiplied x by 10 to get 9.9999...., you know it's always going to lag 1 decimal place behind 0.9999.... on the way to convergence. What you might get instead is (9.9999.... - 0.9999....) = 8.9999....


I like this one better:

    0.9        = 1 - 0.1
    0.99       = 1 - 0.01
    0.999      = 1 - 0.001
    0.9999     = 1 - 0.0001
    0.99999... = 1 - 0.00000... with a 1 at the end of the infinite series of 0


>[...] at the end of the infinite series of [...]

...uhh


"Eternity feels a bit too long ... especially near the end."


> 10x -x

Subtracting infinities is dangerous; you can achieve any result from it.

https://www.youtube.com/watch?v=-EtHF5ND3_s


You're not subtracting anything infinite. Whatever you think of 0.99999... (and the correct thing to think is that it equals 1), I hope we can both agree that it is at least finite! I mean, we can agree that it's less than 2 and more than 0, right?

That subtraction is just as valid as saying 0.333... + 0.333... = 0.666..., or that 1/3 + 1/3 = 2/3.


Well, if I could choose I wouldn't personally accept 1/3 = 0.333... . But rather, I'd say it equals a limit:

    1/3 = lim(N -> oo) 0.3{N}     (3 repeated N times)
In particular, I would distinguish between infinitely many threes, and N threes, where N goes to infinity. In the first case, you would still be missing an infinitesimal amount; in the latter case you have the usual situation and the sequence has the least upper bound of 1/3.

When you are calculating a limit, you can never just plug in the value for N (say if N is in the denominator and the limit goes to 0). Why should you be able to do this when N is infinity?

At least this is my personal justification why I find non-standard reals interesting. They also justify the nice calculation method where you can cancel out 'dx'es from fractions.


But you absolutely can evaluate that limit as N goes to infinity and correctly conclude that 1/3 does equal 0.333 repeating.


Repeating decimals may be introduced in a mathematics education long before any other infinite series or the methods used to tame them, such as limits.


I see the introduction of a conceptually incomplete notion as one of the big hurdles of school-level math: without limits, children are asked to just adopt the axiom that there is such a thing as a concept of infinity with any form of practical usefulness, and this rankles.

It's only with limits and proper formalism that I was reconciled with maths that frankly were just tending towards approaching an equality with bullshit.


I remember a conversation I had with my daughter in the car when she was starting out with algebra...

Me: Is 9.999... the same as 10, or is it just really close to 10?

Kid: Really close. It never gets all the way there.

Me: Well then how close? What do you get when you subtract 9.999... from 10?

Kid: (pause) An infinite number of zeroes. . .and then a one. . .wait, you can't do that.

Me: Right. You just have an infinite number of zeroes. Which is zero.

Kid: (pause) Oh, that's mind-blowing.


> An infinite number of zeroes. . .and then a one. . .wait, you can't do that.

why not? why can't an infinitely small number exist?


It can, and infinitesimals are a part of so-called nonstandard analysis, but you cannot write an infinitesimal using decimal notation. "0.999…1" is simply meaningless, a contradiction. If you have a "…" it means there's no place where you could put a "last" digit. If "0.999…1" doesn't feel impossible enough, then what would "0.999…9" mean?


People in this thread seem to think that “9 repeating” is not an infinity of nines but is instead “write or think of nines until you get bored and then write something else”


Because if it has a 1 at the end, then this marks its end, thus making it finite.


I personally like using fractions of 9.

  1/9 = 0.111...
  2/9 = 0.222...
  3/9 = 0.333...
  ...
  8/9 = 0.888...
  9/9 = 0.999...
What's neat is that this trick works for any repeating decimal, with any number of digits in the repeating part. For instance:

  123/999 = 0.123123123...
  999/999 = 0.999999999...
Multiply or divide by powers of 10 as necessary to shift the decimal point, and add the non-repeating part.

Once you accept this mapping, it's trivial to treat 0.999... as 9/9 (or 99/99, or 999/999, etc). Which can be simplified to 1.
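The mapping is easy to spot-check with exact rational arithmetic. A small Python sketch (the helper name is mine):

    from fractions import Fraction

    def repeating_to_fraction(block):
        # A repeating block of k digits, e.g. '123' for 0.123123...,
        # equals block / (10**k - 1), per the pattern above.
        return Fraction(int(block), 10**len(block) - 1)

    print(repeating_to_fraction('1'))    # 1/9
    print(repeating_to_fraction('123'))  # 41/333 (123/999 reduced)
    print(repeating_to_fraction('9'))    # 1 (0.999... = 9/9 = 1)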


Nice to see a few different proofs/intuitions here. Not being a big fan of symbol manipulation, I always felt partial to the proof/intuition that you couldn't find another number between the two :-)


Well, here you reduced 1 = .9999... to 1/3 = 0.333... What if I don't believe that second equation?


What if we recursively defined 1/3? This will allow us to ignore the infinite 0.333... for a second. As an example, let 1/3 = 0.33 + 1/100(1/3).

A definition of a third that most people agree with is that if we multiplied that value by 3, we should get 1. Let's check the right hand side: 3 * (0.33 + 1/100(1/3)) = 0.99 + 1/100 * 1 = 0.99 + 0.01 = 1. Great!

What other expressions for a 1/3 can we come up with? If you agreed with the previous statement, then you must surely also agree that 1/3 = 0.333 + 1/1000(1/3).

Inductively, we should be able to come up with a general formula that 1/3 = bar(3, n) + 1/pow(10, n)(1/3), where bar(3, n) = sum i = 1 to n 3/pow(10, i). We can check that bar(3, 2) = 0.3 + 0.03 = 0.33, and that our first example fits this formula. Intuitively, this formula is giving us a way to represent 1/3 in terms of n decimal places of accuracy and a recursive term.

The question is now, what happens when we run that formula with n to infinity? An infinite level of accuracy! That expression is equal to 0.333... as we have defined.

The right term, 1/pow(10, n)(1/3), goes to 0, so we can discard that. The left hand side is a geometric series with ratio 1/10 and a scalar multiple of 3. Using a closed sum formula for that [1], we can see that the left hand side goes towards 1/3. (Apply the formula from Wikipedia, but remember our index starts at 1, not 0.)

In the end, we have found that 0.333... = lim(n -> infty) [bar(3, n) + 1/pow(10, n)(1/3)] = 1/3
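That identity is easy to spot-check with exact rationals; a quick Python sketch (with bar following the notation above):

    from fractions import Fraction

    def bar(digit, n):
        # bar(3, n) = 0.33...3 with n threes, as an exact rational
        return sum(Fraction(digit, 10**i) for i in range(1, n + 1))

    third = Fraction(1, 3)
    for n in (1, 2, 5, 10):
        assert third == bar(3, n) + Fraction(1, 10**n) * third
    print("1/3 = bar(3, n) + (1/10^n)(1/3) holds for every n checked")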

[1] https://en.wikipedia.org/wiki/Geometric_series#Sum


As in the 0.333... will stop at some point? That would still mean that 3 times 0.333... with a LOT of 3s ends up being 1.

I also figure it's a bit more intuitive for pupils to just try out calculating the decimal representation of 1/3 and seeing that it'll just keep going forever.


More like 1/3 != 0.3333...

As in, 1/3 does not have a decimal representation. You can only approximate it but never reach it.


This also happens to be the test for whether your calculator is any good.


My high-quality scientific calculator makes this a bit uninteresting:

  1/3            ⅓
  Ans * 3        1


My 5 year old stumped me with this, and I had to look it up. He asked me why 1/3 + 1/3 + 1/3 = 1, since it's equal to 0.333... + 0.333... + 0.333... which is 0.999... How can that possibly equal 1.000...? And is 0.66... equal to 0.67000...?

I didn't have a good enough answer for him, so I had to look it up and found this page. I tried to explain it to him but since I'm a terrible teacher and he's only 5, it was hard for me to convince him. Luckily he has many years before it matters!


> He asked me why 1/3 + 1/3 + 1/3 = 1, since it's equal to 0.333... + 0.333... + 0.333... which is 0.999... How can that possibly equal 1.000...? And is 0.66... equal to 0.67000...?

This would make me very proud.


Yes, it's quite clever. An equivalent proof is dividing 0.999... by 9 using long division, which comes out to 0.111... which is equal to 1/9. Now use fraction notation and it simplifies to 9/9 = 1. Not quite as robust as the limit-based proofs but it's a quick answer and gets to the heart of the issue of repeating notation not capturing the whole picture.


Is this problem simpler than we want it to be? Meaning, 1/3 is a concept stating there is 1 part of 3 total; if you have all 3 parts added together, it is a whole. Trying to shoe-horn it into the decimal system is like trying to represent pi as a clean decimal number. Isn't the issue the representation of the number in one form versus another, not the actual logic of the issue? idk


Why is this an issue, when 0.9999... is exactly 1?


I don't think it is directly. I was referring to the 1/3 comment, but possibly it is related to how we are representing these numbers as decimals. I had made another comment directly on that somewhere in this tree but it was more an intuitive one rather than a proof.


A 5 year old being curious enough to ask such a question is mind-blowing.


> Luckily he has many years before it matters!

and depending on career choices, it might never matter at all.


I asked my math teacher this when I was a kid. He told me to accept that's the way it is so I did.


Kinda like why 5 rounds up instead of down


When 5 rounds up, you can easily implement rounding as floor(x + 0.5).


But it doesn't. The almost-universal standard is to round ties so that the last retained digit is even: 0.5 -> 0, and 1.5 -> 2.

https://stats.stackexchange.com/questions/218821/round-to-ev...
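Python's built-in round happens to follow that ties-to-even convention, so the contrast with the schoolbook floor(x + 0.5) rule is easy to demonstrate (illustrative only):

    import math

    for x in (0.5, 1.5, 2.5):
        print(x, round(x), math.floor(x + 0.5))
    # round() ties to even: 0, 2, 2
    # floor(x + 0.5) always rounds halves up: 1, 2, 3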


No, .666666 is not equal to .6700000.

0.666... is equal to 0.666...7


Not sure if you’re joking, but 0.666...7 is not a real number. Can you define it?


Can you explain what you mean by "real" in that sentence? Because in the context of maths, a real number is "a number in ℝ", which this absolutely qualifies for. Whereas in plain English the term doesn't really have a clear definition.

You might consider "real" numbers (in plain English) to mean physically measurable quantities, but there are plenty of numbers that we can't write out, because they're infinitely long, yet are trivially "made" (such as π, which just requires grabbing a compass and drawing a circle).

The bit you should be wondering about is why 0.666...6 and 0.666...7 are the same number: infinities cause digits written on paper (or a computer screen) to look like a kind of number that they're not. The two fractions (numbers in ℚ) 0.6666 and 0.66667 are 0.00001 apart, but the two reals (numbers in ℝ) 0.666...6 and 0.666...7 are 0.000...1 apart. That looks like a tiny fraction, but it's not a fraction: it's a 1 after an infinite number of zeroes, and thanks to that, this number, while it looks like a fraction, is just a silly way to write zero.

So thanks to infinities, the most-definitely-not-a-fraction number that we write as 0.666... is the same as the most-definitely-not-a-fraction number 0.666...(some numbers here). The difference between the two is zero.

Infinities are fun. And difficult. But also fun.


Infinities are difficult and your explanation is wrong. This notation .666...7 is meaningless. The notation .666... indicates a decimal representation that does not end. The representation has infinitely many sixes and does not end. If there is a 7 in the representation then it must be at a specific decimal. If it was at the end of the representation then it would be a finite decimal representation. The notation is meaningless with respect to the real numbers.


I don't agree with you, or with the sibling comment that claims that "infinity plus one" is "nonsense". It is in fact precisely well-defined, to a mathematician working in a framework that admits it and makes it worth discussing.

The blog post that (I think) started off this chain of posts, at https://blog.plover.com/misc/half-baked.html , has the most concise and lucid explanation I've found, so I'm just going to straight-up copy it: 0.666...7 is “an object... said to ‘have order type ω+1’, and is completely legitimate.”

It's not very useful – it's exactly equal to 0.666... ! – but it's legitimate and well-defined.

My absolute favorite construction of objects like this is Conway's surreal numbers. These things appear perfectly naturally in the surreal numbers, and are completely well-defined, if (again) not very useful.


This reasoning is understandable, but also incorrect, and we can point to where it breaks down: the idea that the 7 is "at a specific decimal" doesn't hold true, due to that pesky ellipsis. The crazy bit about the infinite repetition is that the 7 in 0.666...7 is not at a specific decimal. It's not even at "the last decimal" because there is no last. So, let's show this via a proof by contradiction:

---

1. we assert that 0.666...7 is a sequence of digits.

2. we assert that each digit in a sequence can be assigned an integer index corresponding to its position in the sequence. I.e. we can define each digit's index as "the number of digits that precede this digit".

3. from (1) and (2) it follows that the index for 7 must be an integer.

4. the ellipsis represents an infinite number of digits (infinitely repeating the repeated digit pattern preceding it).

5. from (2) and (4) it follows that the index for 7 must be the integer value "infinity", because it has an infinite number of digits preceding it.

6. (3) and (5) cannot both be true, because infinity is not an integer.

7. from (6) it follows that (1), and/or (2), and/or (4) must be false

8. (4) is, by definition, true.

9. from (7) and (8) it follows that (1) and/or (2) must be false.

10. (1) is our fundamental assertion. If (1) were false then there wouldn't even be a sequence of digits for us to reason about. So (1) is true.

11. from (6), (8), and (10) it follows that (2) must be false for there to be no contradiction.

QED

---

Now, certainly, for finite length numbers the assumption that each digit in a sequence has an integer index holds true, but it turns out we have mathematical notation that lets us write down numbers for which that property does not hold.

Infinities are fun. And difficult. But also fun.


> 1. we assert that 0.666...7 is a sequence of digits.

A sequence in the mathematical sense is a function whose domain is the natural numbers. Please define that function for the creature you're working with here. Otherwise you're trying to prove things about an object with no definition. You will end up in trouble.


Why? https://news.ycombinator.com/item?id=23009160 posited that we were in a situation where (1) holds, so that's where we're starting. If we assume (1) holds, then (2) cannot hold. We can abandon (1) but then we're no longer replying to that specific comment, now we're trying to prove something else.


My point is that it cannot be clear what assuming (1) entails when you aren't properly defining the quantities involved. You have to answer in clear and mathematical language what the quantity in (1) is defined as. What is the definition of "0.666…7"?

As it currently stands, assumption (1) is similar in nature to me saying "gnarfgnarf is an imaginary number". It's completely meaningless unless I define what I mean by gnarfgnarf.

So: what do you mean by 0.666…7?


No, it isn't. The idea that saying "gnarfgnarf is an imaginary number" is completely meaningless is the opposite of true: if you assert that gnarfgnarf is an imaginary number, that is the definition we'll be using for the remainder of whatever proof we use that in. Anywhere the proof now talks about gnarfgnarf, we're talking about something that is an imaginary number, and has to follow all the rules that imaginary numbers have to follow, without ever having to say which imaginary number it is, or further define it. It's "any" imaginary number, we just call it "gnarfgnarf" instead of "x" or "a + bi" or the like.

Same here: we have a number written as 0.666...7 using conventional mathematical notation. The comment that is being replied to asserts that this can be treated as a sequence, and so we start the proof with that definition: "0.666...7 is a sequence", and now we're done. You, as reader of the proof, have been informed that those nine symbols, in that order, for the rest the proof, represent a sequence. Not "a specific sequence", but "any sequence", and it must follow all the rules that sequences follow.

We then show that simply by being "a sequence", due to the properties of sequences, we get a contradiction. Our first assertion is the definition for the purpose of this proof, and is sufficient.


I'm really sorry, I jumped to conclusions as to what you were saying without properly reading the comment you linked to!


Yes, exactly: .666...7 means a number that has 6 at each decimal position and 7 for index k, where k is greater than any natural number. This is exactly .666... (just as the parent commenter explained).


> Yes, exactly, .666...7 means a number that has 6 at each decimal position and 7 for index k, where k is greater than any natural number. This exactly is .666... (just as parent commenter explained).

Ill-defined. Try again.


It's as much nonsense as someone saying "infinity plus one is bigger than infinity!"


It's unfair to ask 'Can you define it?' to someone not educated in real analysis because they don't know which definitions you'll accept and which you won't. They already think that '0.666...7' is a valid definition.


I don't think it's a real number, but if it was this would be yet another intuitive proof that 0.999... = 1.


I'm pretty sure it's a real number in that it falls within ℝ (the set of all real numbers).


It doesn't, because 0.666....7 is not a valid representation for a number. The ... means it goes on forever, and you cannot put something "after forever". It's no different than saying "0.0j" is a real number in base 10; it's not, since that string does not represent a number in our base-10 number system.


There’s not a way to define it, typically you’d define .666... as the sum of 6/10^n from 1 to infinity. This decimal representation does not terminate, so you can’t put a 7 at the “end of it” because there is no end.


You would define it as a digit sequence, using ω + 1 as the indexing set. I would consider it to be most naturally an element of the hyperreal numbers, although it is also contained in smaller extensions of the real numbers.


Sure, you could define some sort of number system that has an infinite point digit. I doubt that’s what the original poster had in mind though.


These number systems already exist, I’m not just making them up.

If someone says 0.666…7 then you have a couple different ways you can take the discussion. You can say, “No! Real numbers don’t work like that!” or you can talk about what number systems would look like if you can do that.

It turns out that there’s a lot to learn from the alternative number systems, including formulations of calculus without limits that are easier to understand from an intuitive perspective, yet equally rigorous. The field is called “nonstandard analysis”. It’s not taught in college.


An interesting consequence of this in proofs.

You’ll see various proofs involving real numbers that must account for the fact that 0.999…=1.0. There are, of course, many different ways to construct real numbers, and often it’s very convenient to construct them as infinite sequences of digits after the decimal. For example, this construction makes the diagonalization argument easier. However, you must take care in your diagonalization argument not to construct a different decimal representation of a number already in your list!
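One standard way to take that care (a sketch of the idea, not the only fix): have the diagonal rule emit only digits that can never participate in a dual ...999/...000 representation:

    def diagonal_digit(d):
        # Differ from d while avoiding 0 and 9, so the constructed
        # expansion cannot be an alternative spelling of a listed number.
        return 5 if d != 5 else 4

    # The n-th digit of the new real is diagonal_digit applied to the n-th
    # digit of the n-th listed expansion: it differs from every listed
    # expansion and, containing no 0s or 9s, it is the unique decimal
    # expansion of its value, so it is genuinely a new number.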


I never understood the fixation on diagonalization. Why couldn't there exist some other way of mapping such a set to the countables?


Diagonalization is a pretty deep argument about fixed points; Gödel's incompleteness argument is essentially a diagonalization. So why wouldn't there be fascination?


The point is that diagonalization works whatever map you have come up with: no matter how you construct your list of the reals, you can come up with another real not on the list.


Flame wars over this used to be common on the internet. People intuitively have the notion that the left side approaches 1, but never actually equals it. They see it as a process instead of a fixed value. Maybe the notation is to blame.


The intuition is right, and the mathematical definition relies on the intuition. It's just that people haven't been exposed to the actual definition when it comes to real numbers.

Mathematically, mathematicians prove that there is a unique number that this process goes to (and not, say, two distinct numbers), and define the notation to represent this unique number.


Repetition can easily be seen as a process, which would indeed approach 1. But I think the idea of infinite repetition is very hard to get.


The intuition that there is something in between isn't really wrong; infinitesimals make sense and they work, otherwise physicists wouldn't be able to work with them. So that intuition is correct; it is mathematicians who just don't understand it fully yet. Maybe fully formalizing this is what unlocks the final piece keeping us from creating a unified theory in physics?


Nonstandard analysis is a rigorous framework for working with infinitesimals (and infinitely large numbers).

https://en.wikipedia.org/wiki/Nonstandard_analysis


I remember WarCraft 3 official forums being torn apart by this, with probably thousands of comments in the thread. Blizzard even had to post their official stance on the issue, but that didn't calm those who insisted 0.999... was 1 minus epsilon and not exactly 1.


I'm glad someone else remembers this. To this day, whenever I see 0.999... = 1, I think of the Battle.net forums inexplicably flooded with threads about it for what felt like ages.


Maybe the major source of confusion is that our decimal representation for whole numbers is supposed to be unique. Then, when we extend it to rationals and reals, this property fails at rationals of the form a/10^n.

Arguably the sign symbol ruins it for whole numbers as well, as +0 and -0 could be equally valid representations of the number 0. We just conventionally don't allow -0 as a representation. There are other number representations that don't have this problem.


I think the source of confusion is that people can't cope with recurring numbers. When someone says .999...=1 the listener assumes that the 0.999... stops at some point, and if that happens it'll always be below 1 because they can imagine adding another 9. Essentially, people actually ignore the "..." because that's the hard part.


Right - I also find it easier to say that really, "1" is just a different/shorthand notation for 0.(9). It's not "two different but equal numbers"; it's two different notations for the same number. Like how you can write the same number in different ways in different bases - this is just writing the same number in the "infinite number of decimals" way vs. the "natural" way.


Arguably 1 is just as much an infinite string of decimals as 0.(9). 1 is just short for 1.(0).


I don’t know, I don’t recall students having much trouble accepting that 1 = 1.0 = 1.000.


0.999... = 1 is a consequence of the way we define rational and real numbers and limits. There are alternative definitions of numbers where this equality does not hold, nonstandard analysis https://en.wikipedia.org/wiki/Nonstandard_analysis being the most famous one.

But for the sake of argument, let's just define numbers as sequences of digits with a decimal point mixed in somewhere:

    MyNumber := {
      a = (a_1, a_2, ...) -- list of digits a_i = 0 .. 9; a_1 != 0.
      e -- exponent (integer)
      s -- sign (+/- 1)
    }
Each such sequence corresponds to the (classical) real number: s * \sum_i a_i * 10^{e - i}.

We can go on and define addition, subtraction, multiplication and division in the familiar way.

Problems arise only when we try to establish desirable properties, e.g.

(1/3) * 3 = 1

Does NOT hold here, since 0.9999... is a different sequence from 1.000....

So yes, you can define these number systems, and you will have 0.999... != 1. But working with them will be pretty awkward, since a lot of familiar arithmetic breaks down.
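
For concreteness, here is a minimal Python sketch of the digit-sequence system above (the helper names and the truncation depth are mine, purely for illustration):

    from fractions import Fraction

    # A "MyNumber" is modeled as (s, e, digits), where digits maps the
    # position i = 1, 2, 3, ... to a digit 0-9.
    def classical_value(s, e, digits, terms=25):
        # The classical real s * sum_i a_i * 10^(e - i), truncated for illustration.
        return s * sum(Fraction(digits(i), 10 ** (i - e)) for i in range(1, terms + 1))

    one   = (1, 1, lambda i: 1 if i == 1 else 0)  # the digit sequence 1.000...
    nines = (1, 0, lambda i: 9)                   # the digit sequence 0.999...

    # As sequences they differ (different exponents, different digits), so in
    # this system they are distinct "numbers"...
    print(one[1], nines[1], one[2](1), nines[2](1))
    # ...even though their classical values agree to any precision you like:
    print(float(classical_value(*one)), float(classical_value(*nines)))

The awkwardness shows up as soon as you try to define arithmetic on these tuples: (1/3) * 3 produces the nines sequence, not the ones sequence.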


1 = 0.9... is the consequence of purposely ambiguous and questionable notation. That's an old teachers' trick to get students talking, and listening, about mathematics.


This has nothing to do with notation. It's perfectly possible to define infinite sequences without using dots. In particular if they are constant. In the above case:

    a_i = 9 for i \in \IZ and i < 0
    a_i = 0 for i \in \IZ and i >= 0
Where \IZ are the integers.


This is 'more intuitive' if you think about it this way:

If any two real numbers are not equal, then you can take the average and get a third number that is halfway between them. Conversely, if the average of two numbers is equal to either of the numbers, then the two numbers are equal. (This isn't a proof, just a way to convince yourself of this.)

What's the average of .9999... and 1?
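
For what it's worth, here's a quick check with exact rationals in Python; it's only suggestive, not a proof, since it uses finite truncations:

    from fractions import Fraction

    # Average 1 with each truncation 0.9, 0.99, 0.999, ... using exact arithmetic.
    for n in range(1, 6):
        x = 1 - Fraction(1, 10 ** n)   # n nines after the point
        avg = (x + 1) / 2
        print(x < avg < 1)             # True every time: a new number strictly in between

For 0.999... itself there is no number strictly in between, which is the point.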


0.999…5 obviously.


But at which position is that "5"? Name the position, please.


Position 999…, duh.


Okay, now multiply that by 2.


0.999…


Or: if they're different, what is their difference?


0.000...1 obviously


Thinking of it this way actually shows you that x.999... (ad infinitum) is the same as y.000 (ad infinitum) if x + 1 = y.


Why not 0.000...2?


There is a nice characterization of decimal expansions in terms of paths on a graph:

Let C be the countable product of the set with ten elements, i.e. {0, 1, 2, ..., 9}. The space C naturally has the topology of a Cantor set (compact, totally disconnected, etc). Furthermore, for example, in this space the tuples (1, 9, 9, 9, ...) and (2, 0, 0, 0, ...) are distinct elements.

The space C can also be described in terms of a directed graph, where there is a single root with ten outward directed edges, and each child node then has ten outward directed edges, etc. C can be thought of as the space of infinite paths on this graph.

A continuous and surjective map from C to the unit interval [0, 1] can be constructed from a measure on these paths. For any suitable measure, this map is finite-to-one, meaning at most finitely many elements of C are mapped to a single element in the interval. For example there is a map which sends (1, 9, 9, ...) and (2, 0, 0,....) to the element "0.2".

The point is that all decimal expansions of elements of [0, 1] can be described like this, and we can instead think of the unit interval not as being composed of numbers _intrinsically_, but more like some kind of mathematical object that _admits_ decimal expansions. The unit interval itself can be described in other ways mathematically, and is not necessarily tied to being represented as real numbers. Hope this helps someone!
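
Here's a toy version of that map in Python, reading a path's edge labels as decimal digits (that's my assumed choice of map; the construction above only requires "any suitable measure"):

    # Map a (truncated) path in the 10-ary tree to a point of [0, 1].
    def path_to_point(digits):
        return sum(d * 10.0 ** -(i + 1) for i, d in enumerate(digits))

    a = [1] + [9] * 40   # the path (1, 9, 9, 9, ...), truncated for illustration
    b = [2] + [0] * 40   # the path (2, 0, 0, 0, ...), truncated for illustration

    # Distinct elements of C, but the map sends both (essentially) to 0.2:
    print(path_to_point(a), path_to_point(b))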


Ultimately this is more a part of the definition of R than a theorem. One can also work with sets of numbers in which the completeness axiom does not hold, e.g., sets of numbers in which one also has infinitesimals.


And this is why I prefer hyperreals.

0.999... = 1 - 1/∞

We talk about infinity all the time in mathematics, teachers use the concept to introduce calculus in a way that people can more easily understand, but using infinity directly is almost universally banned within classrooms.

Nonstandard analysis is a much more intuitive way of understanding calculus, it's the whole "infinite number of infinitely small pieces" concept, but you're allowed to write it down too.


I think what's important here is that if you're making that claim,

0.333... = 1/3 - 1/∞

Which implies 3/∞ = 1/∞


Apologies, I should have explained it differently.

0.999... implies a number infinitesimally smaller than 1. You wouldn't use 0.999... in a hyperreal system because you can represent it directly.

I shouldn't have mixed different systems and claimed they're mathematically equivalent, you've proven that doesn't work.


Agreed, I think the main lesson from this topic is that decimal numbers are a pain. Stick to integers & symbolic operations. You can spit out a decimal approximation at the end.

Computers agree: never trust precision to floats


I'll just chime in with my completely ignorant theory that 1 - 0.999... = the infinitely smallest number, but is still, in my mind, regardless of any logic, reason, or educated calculations, greater than 0.

I understand and accept this is wrong. However, somewhere in my brain I still believe it. Sort of like +0 and -0, which are also different in my head.


So 0 and that number are two different numbers.

What is the difference between them?


One is positive and the other is negative. I thought that was pretty obvious!


Well, if you perform the same calculation in base 12, then you'll get a terminating representation, because 12 / 3 is 4. Thus, in base 12, 1 / 3 = 0.4.

The problem here is our language for mathematics. Just like you have to accept the silent "k" in the word "knife", even when it doesn't make sense, in math you have to understand that rational numbers can't always be expressed exactly as decimals.
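
You can check the base-12 claim mechanically; a small sketch (the helper function is mine):

    from fractions import Fraction

    def fractional_digits(q, base, n):
        """Return the first n digits after the point of the rational q in the given base."""
        digits, frac = [], q - int(q)
        for _ in range(n):
            frac *= base
            digits.append(int(frac))
            frac -= int(frac)
        return digits

    print(fractional_digits(Fraction(1, 3), 12, 5))  # [4, 0, 0, 0, 0] -> 0.4 in base 12
    print(fractional_digits(Fraction(1, 3), 10, 5))  # [3, 3, 3, 3, 3] -> 0.333... in base 10

Of course base 12 has its own non-terminating expansions (1/5, say) and its own version of this puzzle, so switching bases relocates the issue rather than removing it.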


Just to be clear, as I said, I totally accept the correct answer. I was just trying to humorously point out the non-intuitive nature of it.


Philosophically you are right.

Mathematically you are wrong.

It is indeed true that if there were an entity that could reason beyond infinity, 1 - 0.999... would be greater than 0.


>It is indeed true that if there were an entity that could reason beyond infinity, 1 - 0.999... would be greater than 0.

This seems like a common thread in the comments on this article, and I don't quite understand it.

Humans can reason about infinity just fine. We have a hard time picturing it, but it's overall a pretty simple concept: It never ends.

So there's no such thing as being able to reason "beyond infinity", because beyond infinity doesn't exist. Its very existence is precluded by the definition of infinity.


From the perspective of IEEE-754, they’re pretty close ;)


Usually the concept of a limit, which assigns a meaning to 0.999..., isn't studied until calculus.

There are approaches to mathematics that avoid infinite constructions, and a "strict finitist" would not assign 0.999... a meaning.

The stunning success of limit based mathematics makes finitism a fringe philosophy.

Remember, class, for every epsilon there is a delta.


Professor N.J. Wildberger is probably among the most well known "ultrafinitist" on YouTube.

https://www.youtube.com/watch?v=WabHm1QWVCA

I mention him because I would think he sympathizes with those who have concern over the meaning of this kind of notation.


Wildberger is great. The lectures he teaches at UNSW (I think) are interesting, and he usually keeps a clear dividing line between standard math and his own predilections. He threads the line between kook and legitimate published mathematician very finely.

I actually have some sympathies with his contention that real numbers (limit points of infinite series) are somehow a different animal than rational numbers. But it might be easier for me to go there because practically all numbers on computers that we work with are rational, floating point values. On the other hand, it seems like a philosophical distinction in the end because you can fully order them both on a number line.


If I give you two representatives of real numbers, say Turing machines that write out on their tape the binary digits of those real numbers, in general you will not be able to order them.


Certainly not non-computable ones, but presumably they lie somewhere regardless of my inability to do it on a TM. Which presumably gives rise to all the weirdness uncountable infinities give you.

I guess I shouldn't phrase it as "you can fully order it". :D Zermelo's theorem at that point right?


Even computable reals do not have computable ordering.

> but presumably they lie somewhere regardless of my inability to do it on a TM.

Why?

> Zermelo's theorem at that point right?

It is declared by fiat in standard set theory that infinite sets can be well ordered. There is no real mathematical justification. The real justification is social: it is convenient for mathematicians not to care about the ontology of these nasty infinite objects so long as results are mostly reasonable for the objects mathematicians actually care about. You don't get into too much trouble pretending the reals are nice so long as you don't look too hard.


I mean that's Wildberger's whole point isn't it?


Yes, I'm just trying to emphasize that there is real serious mathematics behind his point, it's not just a matter of philosophical taste.


But on computers, you get things such as

    console.log(0.1 + 0.2)
    // 0.30000000000000004

A mathematician might say that this shows that you do not really have accurate floating point values and arithmetic in your computer, but instead something close to it.


That's a coincidence of the particular number system you use, and many programming languages have multiple number implementations.

Racket starts with arbitrary precision rationals.
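
Python ships the same escape hatch in its standard library, for comparison:

    from fractions import Fraction

    print(0.1 + 0.2)                          # 0.30000000000000004 (binary floats)
    print(Fraction(1, 10) + Fraction(2, 10))  # 3/10, exact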


0.999...=1 is true in the mathematical sense, period.

However, as a representation of the physical world, there is a caveat. As far as we understand, the physical world appears to behave discretely, because at the Planck scale (approx. 10^-35 m) distances seem to behave discretely.

Although most people don't know or understand the Planck scale, they do grasp this concept intuitively. What they are really saying is that in the physical world there's some small interval (more precisely, about [1 - 10^-35, 1]) which can't be subdivided further, based on our current knowledge.

The same thing applies to the Planck time (approx. 5 * 10^-44 s) too.

So people are arguing two different things - the pure maths concept, or the real world interpretation.


The thing that helps me "understand" it is that the universe has finite sizes of things: the Planck length, for example, is a theoretical smallest distance, as I understand it. Now imagine the difference between .9 repeating and 1 being smaller than the Planck length, since infinitely small differences can do that. Essentially, there is then no way to tell the difference between .9 repeating and 1 from a practical or theoretical perspective of measurement. So not imagining infinity lets us at least imagine something smaller than the smallest measurable thing.


>The Planck length, for example, is a theoretical smallest distance

The Planck length might or might not be a physical limit of the universe. We don't have any specific proof that it's the smallest, just that we will not be able to observe any of that size or smaller. To look at something, we need to use light, and we must use a wavelength smaller than the details we wish to resolve. For something of the Planck length or smaller, this ultimately results in a photon that would have more energy in that area than can exist without a black hole forming... so one does, which then prevents us from measuring it, much less anything smaller.

Space and time might very well be discrete and not continuous - certainly the Loop Quantum Gravity folks would agree there. But there are widely supported theories that take both sides.

(I tend to lean towards them being discrete, but I would hesitate to call myself even an amateur hobbyist when it comes to theoretical physics...)


I hate to say it, but I still don't believe this; it just goes against all the intuition that I have. People much smarter than I have proven it, so I take it on faith for doing things like calculus, but my lizard brain won't let me accept something that looks like less than 1 being 1. The same way that the limit of 1/x as x goes to infinity is zero, but it doesn't seem like it should be: the number gets infinitesimally small, but it's still some non-zero number. I dunno, this is probably proving my ignorance; it's just what it is.


Don't feel too bad.

The notation "0.999..." looks non-threatening, which tricks people into believing that they understand what it means. We could make "0.999... = 1" look scarier by writing it as [n ↦ 1 - 10^(-n)] = [n ↦ 1], where [n ↦ a_n] denotes the equivalence class of a Cauchy sequence of rational numbers. These statements mean the same thing, but with the scarier notation much fewer people would mistakenly believe that they understand what it says.

I would expect mathematics majors to learn what 0.999... means during their undergraduate university courses. But then there's still the question of why mathematicians chose to define it that way. To really understand that, you need to be able to come up with alternative definitions and to investigate the consequences of those definitions. And for most undergraduates, it might still take a few years to build that level of mathematical maturity.

For anyone who is not a math major, I certainly don't want to discourage any curiosity about this subject. Just don't be discouraged if you feel you can't fully understand what's going on. Understanding what 0.999... means and why mathematicians chose to define it that way is quite subtle.


Thank you. I had totally expected to get dunked on and find your empathy refreshing.

I’ll keep trying to understand it.


It has to do with infinities (infinite sums) which might explain why it’s so interesting


Maybe try this: do longhand division of 7 by 7 or 9 by 9 or whatever, but instead of putting 1 in the first place, start with 0.

        0.99
    7 | 7.00000
        0
        7 0
        6 3
          70
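
In code, the trick amounts to refusing ever to write the quotient digit 10. A sketch (my own helper, not something from the comment above):

    # Long division of num by den, but each quotient digit is capped at 9
    # ("start with 0" instead of 1), so 7 / 7 never terminates and emits 9s.
    def stubborn_division(num, den, n_digits):
        digits, r = [], num
        for _ in range(n_digits):
            r *= 10
            d = min(9, r // den)   # refuse to ever write a "10"
            digits.append(d)
            r -= d * den
        return digits

    print(stubborn_division(7, 7, 10))  # [9, 9, 9, 9, 9, 9, 9, 9, 9, 9]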


I follow that, but then I could see it just continuing: .9999 forever, never converging to 1; it's always that tiny fraction less than 1...


I think if you're trying to "prove" this using axioms, you've already lost.

The problem isn't that you can't come up with axioms to convince people you have a proof - the problem is with people not understanding that 0.99999.... is not a number - it's one representation of an abstract entity called a number.

The problem is, the maths required to actually define the concept of a number is fairly complicated, so it's hard to explain to someone why all of these axioms make sense in the first place.


Can someone help me out here with least upper bounds?

Generally the proofs of .9... = 1 rely on the fact that no number exists between .9... and 1, and therefore .9... is equal to 1.

.9... is the least upper bound of the set. My question is: if .9... were removed from the set, what would the new least upper bound be? Another way of asking the question: if we define it in this context, doesn't any set bounded by a real number have a least upper bound, and aren't all real numbers then equal to each other?

Thanks!


I think this is a notation and definition problem. To me, it behaves differently in `Y = 1 / X`, which distinguishes quite strongly between `X = 1 - 0.9999...` and `X = 0.9999... - 1`! If 0.9999... ought to be exactly equivalent to 1.0, there ought to be no difference between `1 / (1 - 0.9999...)` and `1 / (0.9999... - 1)`.

To me, 0.9999... indicates a directional limit, which can't necessarily be evaluated and substituted separately from its context.


Luckily your taste doesn't factor into whether it's true or not :-)


I'm actually curious what impact it would have on various proofs if 0.999... wasn't accepted as 1.

What gets broken? What consequences do we hit?


Arithmetic breaks, as multiplication is no longer the inverse of division. (For example, 1/3 * 3 = 0.999… would no longer work.)


why?

1/3 * 3 could still be equal to one, but 1/3 != 0.33333...; that is, 1/3 is not representable in base 10. Which makes way more sense.

I wonder if taking 0.9999... != 1, that is, accepting that 0.0000...1 exists, would allow us to resolve the fact that some possible events have probability 0?


The crux of the matter is that you have to define what things like "0.3333..." mean in the first place. Any reasonable definition of it as a representation of a real number is going to lead to it being equal to 1/3.

If you want to redefine it explicitly as not a real number, you can do that, and maybe even get to some amusing math that way, but you're no longer talking the same language as the rest of the world.


>"but you're no longer talking the same language as the rest of the world"

yes, in the standard real numbers 1 = 0.999.., but people have dealt with numbers like "pi" and "sqrt(2)" before the standard real numbers were defined.

Hence the question: if we define a system in which 0.333... != 1/3, what are the consequences?

by 0.3333... I mean a countably infinite sequence of 3s.


I think an important distinction is that in those "old days", people were largely working in what we now know to be subsets of real numbers, and the same conclusion applies there.

If you want to go to supersets of real numbers, you may be interested in https://en.wikipedia.org/wiki/Surreal_number


If 0.333... != 1/3, then they are different numbers and the expression 1/3 - 0.333... must have some value different from zero. What is that value?


> but 1/3 != 0.33333...

But the issue is that this is easy to verify experimentally via (in this case infinitely) long division that you can do by hand. So it’s hard to convince people of this.


But the long division algorithm never terminates.

Why would it terminate at countable infinity?


You can show that the long division algorithm is looping. Further, you can show that it will continue, for these inputs, to produce `3`s forever with no change in state. How could `...` be defined such that it wasn't 0.333...?


Limits would break, which might break or at least cause problems with calculus and other branches of math.


So, let's take .9, .99, .999 and so on. If a sequence of rational numbers converges, it converges to a real number. What number does .9, .99, .999, .9999 (and so on) converge to? Which is to say, is there a number that it gets closer and closer to at every step? Clearly it gets closer and closer to 1 at every step, so the sequence converges to 1.

This is one of the many (equivalent) ways the real numbers are defined to begin with, https://en.wikipedia.org/wiki/Construction_of_the_real_numbe...

There are lots of other ways to define sets with operations, but they won't be anything at all like the normal numbers you are used to.


I don’t think this reasoning is correct, because the limit of an expression does not have to be the same as the value at the point the limit is being taken. For example, if your sequence is “sin(1/x) * x“, then it’ll slowly appear to converge to one as x approaches infinity, but it cannot reach it. So there’s really no relevant conclusion you can make.


You're taking a map from R -> R, I'm taking a sequence of discrete values. It's not the same thing.

However, I was a bit fast and loose: the sequence .9, .99, .999, .9999 also gets closer and closer to 2, but it doesn't converge to 2. I should have said that if you have a metric |·| and some number X such that for any d > 0 there exists an N such that |X - A_n| < d for all n > N, then the sequence A converges to X. But I wasn't trying to write a proof.

https://en.wikipedia.org/wiki/(%CE%B5,_%CE%B4)-definition_of...
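
For this particular sequence you can even exhibit the N for each d explicitly; a numeric sanity check (floats, so purely illustrative):

    import math

    # For A_n = 1 - 10**-n and any tolerance d > 0, taking
    # N = floor(log10(1/d)) + 1 gives |1 - A_n| < d for all n > N.
    for d in (0.1, 1e-3, 1e-9):
        N = math.floor(math.log10(1 / d)) + 1
        assert all(abs(1 - (1 - 10.0 ** -n)) < d for n in range(N + 1, N + 6))
    print("ok")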


You can multiply 1/3 by 3 and not get 1.


I have a pizza. I divide it into three parts.

You'd be asserting that if I eat the three parts I have not eaten the whole pizza.

I'm unconvinced.


You lost (1 − 0.999…)th of a pizza to crumbs when you divided it.


Get a sharper knife.


I'm not a mathematician, but I would guess that the Surreal numbers developed by John Conway do contain values that start with an infinite sequence of 9s (0.9̅) but "end" with something that makes them != 1.

https://en.m.wikipedia.org/wiki/Surreal_number


What if you have 0.9̅4? Can we say 0.9̅5 > 0.9̅4 > 0.9̅3? More on what happens if you allow this: https://mathwithbaddrawings.com/2013/08/13/the-kaufman-decim...


That's fun but it's not clear how interesting those numbers are compared to real numbers, which have turned out to be pretty interesting over the years


>What if you have 0.9̅4?

Well, you fundamentally can't. If the 9s go on for forever then you never reach a point where you can add the 4. The definition of infinity precludes anything after infinity, because it never ends so you can never get there.


In this alternate number system you can think of it as a tuple: (.9̅, 4). Which is larger than (.9̅, 3) and smaller than (.9̅, 5).
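
In Python terms, the ordering is just lexicographic tuple comparison (an analogy only, not a construction of such a number system):

    # Compare the "0.9-repeating part" first, then the appended digit.
    a, b, c = ("0.(9)", 3), ("0.(9)", 4), ("0.(9)", 5)
    print(a < b < c)  # True: the trailing digit breaks the tie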


That is, until Cantor showed otherwise


Are you saying that Cantor showed you can have 0.9..4?

I'm not even remotely an expert here, so I might certainly be wrong, but I don't understand how Cantor's theorems show an ability to stick a finite number and stop on the end of an infinite number.


Yes, one thing Cantor showed is that it is somehow meaningful to define numbers like infinity+1, where there are an ordered infinity of elements followed by one more element. Sets like this are called ordinals.

https://en.wikipedia.org/wiki/Ordinal_number

So you could, if you like, define 0.9...4 to be a bunch of digits indexed by the ordinal ω+1. However the thing you have now defined isn't really a representation of a real number any more, unless you just ignore all the bits after the ... I guess.


Huh, interesting. Looks like I've got some more reading to do.

Thank you!


A dumb consequence of the axiom of choice? The reals are like a membrane with no atomic pieces. You can move in either direction infinitely and you can zoom in infinitely without reaching any “Planck unit” so to speak. So what does it even mean to pick out a “real number”? To me anything built on this concept is nonsense.


Sorry for my naivety, but why couldn't one prove by induction that adding 9s never closes the gap, or, let's say, that by definition the operation is such that it never closes the gap? If you can always halve the pie, then you can continue eating forever. To me it would be much easier to accept that (1/3)*3 is not 1.


Other people have pointed out that induction never makes the jump from a finite number of 9s to an infinite number of 9s.

I feel the easiest "proof" is a proof by contradiction.

First hopefully we can agree that if we have two real numbers x and y that are not equal then we have a number z, such that x < z < y. The easiest example is z = (x + y)/2.

If 0.999...!= 1 then there must exist a number A, such that 0.999... < A < 1.

Now since 0 < A < 1 (as 0 < 0.999...), it should be easy to see that A's decimal expansion is of the form 0.abcdef... . Since A != 0.999..., one of the digits in the decimal expansion of A has to be something other than a 9.

For instance, we might have A = 0.99998999... .

However, this would mean A < 0.999..., as all digits other than 9 are smaller than 9 [1], but this contradicts our initial assumption that 0.999... < A, and since we're dealing with strict inequalities both can't be true at the same time! Thus no such A exists, and thus 0.999... = 1.

Now this isn't a rigorous proof; the thing that makes me most uncomfortable is the bit where I state A < 0.999..., because A might have multiple decimal expansions and I don't know how the algorithm in [1] interacts with that. However, if someone quibbles about that bit of the proof for that reason, I feel they should have already accepted that 0.999... = 1 via another rigorous proof.

[1] If this is not clear think about how you would compare two decimal expansions to see if one is smaller than the other. You go through every digit until you find one that is different between the two numbers and then you compare those.


"If 0.999...!= 1 then there must exist a number A, such that 0.999... < A < 1."

Thanks for the reply, but I don't know how you can make the above claim, because to me the question of what we mean by 0.999... is intertwined with the claim itself. I mean to me it seems to be a matter of how you interpret the approach to infinity. I don't see why there has to be A in between, if you interpret 0.999... as the "biggest possible number below 1", as then there would be also a "difference of the smallest possible amount" between those numbers approaching 0, but not quite getting there. But then again, if it's by some fundamental definition (limit) that 0.999... = 1, then ok.

I'm slightly embarrassed I don't know more mathematics, but I'm trying to learn some more...


That statement is based on:

> First hopefully we can agree that if we have two real numbers x and y that are not equal then we have a number z, such that x < z < y. The easiest example is z = (x + y)/2.

It's one of the properties of the reals and rationals that if two numbers aren't equal, then there are infinitely many numbers between them. It doesn't work with the integers, 2 and 3 aren't equal, but there's no integer x, such that 2 < x < 3.

So if we say 0.999... != 1, then that means 0.999... < 1, and then that means (0.999... + 1) / 2 is a number different from 1 and different from 0.999..., but that lies between them.

But this is modern mathematics. In the past mathematicians dealt with "infinitesimals", especially in the early days of calculus, but I think they were discarded in favor of limits because they were confusing and also not necessary. This is, I think, where some of your confusion is coming from. Infinitesimals don't exist in the real (and therefore rational) number space, but the concept exists for other "weirder" number systems.

According to the wikipedia page "This repeating decimal represents the smallest number no less than every decimal number in the sequence (0.9, 0.99, 0.999, ...)"

The clearest resolution to "if you interpret 0.999... as the "biggest possible number below 1"" is that in the reals and rational this concept doesn't exist. There's no biggest number smaller than x and ditto for smallest number bigger than y. (The distinction from the wikipedia definition is the difference between < and <=, <= exists, but < doesn't)

It's similar to saying you can't divide by 0. Sure in some cases you can define it, but doing so causes many issues and costs you so much that it's not worth it.[1] Another example is 1 not being prime, there's no reason for it not to be prime, but it's just much more convenient to say arbitrarily that it's not prime.[2]

The other thing that might confuse you is a proof by contradiction, I know it certainly confused me the first few times I saw it. I'm happy to help you if this is tripping you up too.

> I'm slightly embarrassed I don't know more mathematics, but I'm trying to learn some more...

No problem, we all start knowing nothing :)

[1] https://www.youtube.com/watch?v=BRRolKTlF6Q [2] https://www.youtube.com/watch?v=IQofiPqhJ_s


Thanks :). I think the resolution to my problem is moving away from infinitesimals. I did learn some calculus, linear algebra, number theory etc. in university, but I did it quite superficially, since it didn't feel that relevant to me at that time. Now I feel different.


Let's try a naïve inductive proof of 0.999… ≠ 1.

  Base case: Given 𝑎⁰ = 0, 𝑎⁰ ≠ 1.
  Inductive case: Given 𝑎ⁿ⁺¹ = 1 - (1 - 𝑎ⁿ) / 10, 𝑎ⁿ ≠ 1 ⇒ 𝑎ⁿ⁺¹ ≠ 1
This proves that for every 𝑎ⁿ = 0.999…9, there's an 𝑎ⁿ⁺¹ that's a 9 longer and still different from 1, which is similar to your halving-the-pie example. However, you can see that it always happens that 𝑎ⁿ⁺¹ > 𝑎ⁿ, so the "last" infinite 0.999… is not part of the sequence covered by the inductive case.

My intuitive way to see this is that infinitely repeating decimals are an abomination that breaks the nice property of decimal notation where each number has a single representation (without zeroes at the beginning or at the end of the decimal). Fractions are the one true way to represent rational numbers.


It's because the rigorous mathematical definition is more subtle. 0.999… is defined as the smallest real number that none of the numbers produced by this process (putting more 9's on the end) can exceed: the least upper bound. So as long as you have a number that's smaller than 1, there's a gap there, and repeating the process of adding 9's will eventually give you a number that lies in that gap.


Yeah, it makes sense if the whole foundation of mathematics is laid out in such way that the above has to be true (kinda circular, but still), and this would mean that what I'm talking about is actually not mathematics but something else :).


By induction, you can prove that no _finite length_ sequence of 9s after the decimal point reaches one. It has nothing to say about the infinite limit, though.


Induction can be non-intuitive to people who are troubled by this problem.


Induction has nothing to do with this. It's about a taking the limit of a process.


Is 1 a prime number? No, because we define it not to be. Why do we define it not to be a prime number? That's the real question.

Is 0.999... = 1? Yes, because we define decimal numbers to behave that way. Why do we define them to behave that way? That's the real question.


The best way to think about it is that 1 - 0.999... = 0.000...

The result of 1 minus 0.999... is 0.000 with zeroes that go on to infinity. And I think it's easier to reason that 0.000 with zeroes repeating forever is in fact equal to zero.


This can be solved by using base 12 rather than base 10 to do the calculation...


Firm believer that adopting base 12 would have had a ripple effect on society, preventing many trials, tribulations, and wars. Pity we only have base 10 and Donald Trump.


base 12? Them are fighting words...I call WAR!


We haven't even started the Pi Tau argument yet


Yes it does, but it seems like a less correct way of writing it. Like, you could represent the number 10 as 10/1 (ten over one), but why would you? Why would you represent 1 as .9 repeated?


Basically an infinite series of 9's just means they're all maxed out, so .999... = 1 makes sense to me (kind of, anyway).


    # on Python (3.7.4)
    1 == 0.99999999999999994448884876874217297882
    # but
    1 != 0.99999999999999994448884876874217297881
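
Both literals get rounded to the nearest IEEE 754 double before the comparison: the first rounds up to 1.0, the second rounds down to the double just below it. On Python 3.9+ (newer than the 3.7.4 above, so this part is an assumption) you can ask for that neighbor directly:

    import math

    print(math.nextafter(1.0, 0.0))  # 0.9999999999999999, the closest double below 1.0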


Or any other language using IEEE 754 64 bit floats


I see... Nice!


Perhaps the natural discomfort many face when confronted by this challenging formulation instead indicates that a limitation of the real number system has been perceived? I would encourage those who have this reaction to study the hyperreals and other alternative systems as mentioned in the article. If this clicks for them, they may help lead us in a new direction mathematically and advance the state of the art.


This comes from the fact that most people don't learn limits. Once the definition of the decimal representation is understood to be in terms of a limiting process, the meaning becomes clear.

For most purposes hyperreals are too much machinery when learning and manipulating limits would be simpler.


Why do we accept .999... as a valid notation. Why not only allow 1 to denote this concept?


> Why do we accept .999... as a valid notation. Why not only allow 1 to denote this concept?

That would be adding a special rule for purely cosmetic reasons. It's typically not done in mathematics, where concise rules are usually more cherished than special-casing things.


I'd have figured "approaches 1 from the left" would be more accurate.


So is then ‘0.(0...)1 = 0’?


So 99.999..% of the speed of light is just the speed of light?


Yes.

How much faster than 99.999..% the speed of light would you need to go to get as fast as the speed of light?

If your answer is 0.00..1% of the speed of light, this answer is nonsensical, because infinity never ends and you never have the ability to add that 1 at the end. So the only answer can be 0.00.., and 0.00.. = 0, so 99.99..% has to = 100%.


Hmm, this reminds me a lot of Zeno's paradoxes.

Also, I'd like to understand: if we can say 99.999... = 100, wouldn't we also be able to say 99.9999888999... = 99.9999..., and therefore also just 100? And so on?


> Also, I'd like to understand: if we can say 99.999... = 100, wouldn't we also be able to say 99.9999888999... = 99.9999..., and therefore also just 100? And so on?

You've just introduced a novel symbol into the discussion. What is the definition of 99.9999888999...? Before anyone can answer what it equals, you must define it. Recall that if the digits repeat, we already have a definition, so that case is fine. Your case is not a priori well-defined.


Given infinity that doesn’t make sense.


What is the largest number smaller than 1?


0, if we're talking natural numbers or integers. There isn't one if we're talking real numbers. Simple proof:

Assume x is the largest number smaller than 1.

(x + 1)/2 is a number larger than x but smaller than 1. Our assumption that x is the largest number smaller than 1 must be wrong. QED.


How about the expression:

    0.9999... < 1
And consider that if a < b then a != b.


Both are false. 0.99999... is not less than 1. It is the same as 1.


Yes, because someone defined it that way.

It is because the "limit" in

    0.999... = lim[eps->0] 1-eps
is implicit and defined as being applied before anything else. But you might as well define that implicit limit as applying over the entire expression.

UPDATE: So instead of interpreting the expression as:

    (lim[eps->0] 1-eps) < 1
which is indeed false, you can also interpret the expression as:

    lim[eps->0] ((1-eps) < 1)
which is true (assuming that -> denotes a limit from above). Note that here the "lim" has been taken out and acts over the entire expression.


> which is indeed false, you can also interpret the expression as:

> lim[eps->0] ((1-eps) < 1)

Assuming you don't mean some special notion of limit, I would guess that by `(1-eps) < 1` you mean the function from the reals to `Y = {false, true}` that is defined as sending `x` to `true` if `1-x` is strictly less than `1`, and `false` otherwise. Let's call this function `f`. I assume you're endowing `Y` with the discrete metric?

If so, `f` does indeed have a limit from above that is `false` and a limit from below that is `true`. Where do you wanna go from here?

Edit: Corrected stupid wrong assertion about limit from below, d'oh.


Have a look here:

https://www.wolframalpha.com/input/?i=Limit%5BSign%5B1-x-1%5...

This shows that the limit does exist from both sides (but is different from both sides).


Oops, sorry, that was a stupid blunder. Thanks for the correction.


> If so, `f` does indeed have a limit from above that is `false` and a limit from below that is `true`. Where do you wanna go from here?

That's the other way around. From above you get true, and from below you get false.

Note that 0.999... represents the limit from above for 1-eps. Hence the result is true.


> That's the other way around. From above you get true, and from below you get false.

Yeah, my bad.

> Note that 0.999... represents the limit from above for 1-eps. Hence the result is true.

I mean you can define 0.999… like that if you want. You'll get 1. So what? Your detour via `f` provided nothing.


It's still not true. In no sense is it true. It is true that 0.999... = lim[eps->0] (1 - eps), but it is ALSO true that lim[eps->0] (1 - eps) = 1. That's how limits work.

If you accept both of those two (which you should, because they're correct), then since equality is transitive, 0.999... = 1. Therefore it is not less than 1, it is equal to 1.


I think you should look more closely at what I did with the order of operations, and the fact that "<" now is part of the expression acted over by the limit.


Limits are defined for functions, and "1 - eps < 1" is not a function, it's an inequality. But even if we pretended that limits were defined for inequalities, the only reasonable interpretation of it is that:

    lim[eps->0]( 1 - eps < 1)
is equal to

    1 - 0 < 1
which is obviously false. Again though, limits are defined for functions, not inequalities. If you have the limit

    lim[eps->0] (x - eps)
Then it is equal to

    x - 0
which is equal to x.

You're simply wrong here. You can read that wikipedia article if you don't trust me, or you can watch any of the thousands of youtube videos of mathematicians explaining this to you.

This is one of those things that is hard to do. Once you have an idea in your head that you're sure is right, it is incredibly difficult to dislodge it. It takes an enormous amount of humility and intellectual flexibility. But it is healthy and good to do it, every time you do it, you become a better person for it.


You know, inequalities can be interpreted as boolean functions. The function under the limit produces consistently true as you approach the limit. (The one-sided limit, of course; the case where epsilon is getting smaller.)

It isn't a valid argument for 0.999... != 1 but an interesting one nevertheless.


It is not reasonable to interpret the notation that way. For one, variable binding becomes incoherent.


I think it is perfectly reasonable for two reasons. One is that (when it works) the notation becomes more intuitive, as in

    0.9999... < 1
And the second reason is that mixing implicit limits (or series or infinity) and numbers is a mathematical hack, and when the notation doesn't work it is a clear indication that something is wrong.


> But you might as well define that implicit limit as applying over the entire expression.

What is this supposed to even mean?


It's false.


"In other words, "0.999..." and "1" represent the same number." - Ok, I'm done with this world.


I don't think that 0.999... = 1 is actually provable. I think this and all of calculus is actually axiomatic, which has the following axiom:

Given ε = 1/∞ then: ε = 0

Am I wrong in thinking this way? It seems as though there's no way to actually truly prove that an infinite series converging towards zero actually hits zero (from a constructivist pov)


That's not an axiom we take; the study of real numbers takes some algebraic and order axioms, and a completeness axiom:

Any non-empty set of real numbers with an upper bound has a least upper bound.

We can't prove your statement

Given ε = 1/∞ then: ε = 0

because it is not well-defined, but we can prove this:

If ε ≥ 0 and, for every natural number n, ε < 1/n, then ε = 0.

For suppose there exists an ε which is a counter-example, i.e. ε > 0 and ε < 1/n for every natural number n. Then the set

S = { x : x a real number, x > 0 and x < 1/n for every natural n},

is non-empty, and has an upper bound (e.g. 1). So it has a least upper bound, say y. In particular, y is an upper bound, so 2y is not in S. It is > 0, so there must exist a natural number N for which 2y ≥ 1/N. But then y/2 ≥ 1/(4N), and 4N is also a natural number. So for any element x of S, x < 1/(4N) ≤ y/2; thus y/2 is an upper bound which is less than y.

This is a contradiction, so the claim is proved.
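
For reference, the claim just proved is an Archimedean-style property; in symbols (my transcription of the statement above):

    \forall \varepsilon \in \mathbb{R} :\ (\varepsilon \ge 0 \ \wedge\ \forall n \in \mathbb{N} :\ \varepsilon < 1/n) \ \Rightarrow\ \varepsilon = 0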


Why is your proof for every natural number n and not infinity (ω)? Also, isn't the law of excluded middle axiomatic?


A mathematician would quibble with your notation, but you're basically right. The fact that 0.999... = 1 depends on the fact that 0 is the only number smaller than 1/n for every integer n. This isn't exactly an arbitrary axiom, it encodes some of our natural intuitions about what a number is, but you can construct a system known as the "hyperreal numbers" where it isn't true and 0.999... doesn't converge.


Well, that wiki page has several proofs using various approaches. There's also a section which addresses common objections.

I think you'd have to dismiss them all to make your claim (to do it well, that is).


Afaict all of these rely on the axiom of Archimedes which is what I'm talking about, some of the proofs explicitly stating it and others implicitly. Unless I missed something.


I think the mistake in your thinking is that infinity is a value or reachable destination. It's a concept and it behaves differently than a number.

Also go read Generatingfunctionology and Concrete Mathematics


That's the point I'm trying to make, actually. 1/infinity is an infinite series, i.e. a computation that takes infinite time to compute. Saying that 0.9999... = 1 is saying that the infinite series is the same as that concrete value, which we don't actually know and can't prove.


I think the hiccup here is the notion that an open form expression doesn't have a closed form representation, which is not always the case. Closed forms exist for infinite series and vice versa.

1 - \sum_{k=1}^{\infty} 10^{-k} is one example.
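
Numerically, the partial sums march toward that closed form (floats, just for illustration):

    # 1 - sum_{k=1}^{n} 10**-k approaches the closed form 8/9 = 0.888...
    for n in (1, 2, 5, 15):
        print(n, 1 - sum(10.0 ** -k for k in range(1, n + 1)))
    print(8 / 9)   # 0.8888888888888888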


Reminds me of the paradoxes of Zeno [1], especially the paradox of Achilles and the tortoise.

At least one can simply prove that 0.999... = 1 without much hard work. Maybe less controversial than the following:

    1 + 2 + 3 + ... [somehow] = -1/12 {{Riemann's zeta(-1)?}}
    1 + 2 + 4 + 8 + 16 + ... [somehow] = -1
As well, the weird prime product (the product of 1/(1-(p^-2)) over primes p) and the sum of x^-2 from x=1 to [sigh], both being equal to (pi^2)/6, are some examples of the infinite beauty of mathematics that I remember.

[1]: https://en.wikipedia.org/wiki/Zeno's_paradoxes


It's less controversial than those because it's actually true. The latter statements are not true with the traditional definition of a sum.


1 + 2 + 3 + ... does not equal -1/12, except if you redefine what you mean by +. https://www.youtube.com/watch?v=YuIIjLr6vUA


Your two examples can be debunked though.

See here: https://www.youtube.com/watch?v=YuIIjLr6vUA


ah, Mathologer video.

Have seen that.

Another one by 3b1b on that topic: https://youtube.com/watch?v=sD0NjbwqlYw


> ...infinitely many 9s...

How about we prove that an infinite number of 9s is impossible?

Assume that we have a finite number of 9s. Add a 9. The result is not infinite. Add another 9. The result is still not infinite. We can repeat this process for an infinite amount of time and still not have an infinite number of nines.

Any process that can not be completed in a finite amount of time can not complete and can not have a valid result based on that completion. Any process that can not be completed in an infinite amount of time is also bogus, but is in a sense even more bogus.

Added: Note that this is different than the case where we are asked to contemplate infinity with respect to continuous functions. By defining the number of 9s as a discrete (integer) value it opens things up to a discrete argument. These pointless navel-gazing exercises always end up as a war over what everyone thinks things are defined as.


> We can repeat this process for an infinite amount of time and still not have an infinite number of nines.

Isn't this somewhat ill-defined? You may have a finite number of 9s at any one point when adding more 9s, but there is no point "at infinity" where you can stop and look at how many 9s you've added because by definition there are still more 9s to add.

It's like trying to prove infinity is impossible:

> Assume that we have a finite number. Add 1 to that number. The result is not infinite. Add another 1. The result is still not infinite. We can repeat this process for an infinite amount of time and the number is still not infinity.

Sure, individual numbers "on the way" to infinity are not infinite, but that doesn't necessarily disprove the existence of infinity.


> We can repeat this process for an infinite amount of time

Well, no, you cant. How about I prove it to you:

Assume that we have a finite number of processes. Repeat the process. The number of processes is still not infinite.

/s


I tried this route as a counter to Cantor's diagonal argument, and got chastised by my then professor. I hope you have better luck, as I was never able to convince myself otherwise.


By that point you were already working in a universe where infinity was in some sense real. The universe of mathematics is different than the universe we live in.


> How about we prove that an infinite number of 9s is impossible?

First you need to define what you even mean by this statement. The rest of what you wrote makes no sense either.



