Hacker News
A number that can't be calculated (amolas.dev)
41 points by alexmolas on Oct 25, 2022 | hide | past | favorite | 76 comments


I mean, almost every real number is not computable. If you stick your hand into a grab-bag of reals, you will grab an incomputable number with probability 1. As another commenter mentioned, the computable reals are countable, and the reals are very much not.

There is a whole field of mathematics striving to see how far you can go, analysis-wise, with just computable numbers.[0]

[0] https://en.wikipedia.org/wiki/Computable_analysis


This is pretty trivial: the set T of Turing machines is countably infinite (each machine is a finite list of instructions over a finite alphabet, so machine descriptions are enumerable), as is the set N of iteration counts. We have a mapping f(t, n) -> x where, by definition, the image X is the set of computable numbers. Since every x has at least one corresponding pair (t, n), |X| <= |T| * |N|. As T and N are countably infinite, X must also be countably infinite by the Cantor-Bernstein theorem.
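The countability step can be made concrete with the classic pairing function, which witnesses |N x N| = |N|. A minimal sketch (the name `cantor_pair` is mine, not from the thread):

```python
def cantor_pair(t, n):
    """Cantor pairing function: a bijection N x N -> N.
    Composing it with the enumeration f(t, n) -> x witnesses
    that the computable numbers are at most countable."""
    return (t + n) * (t + n + 1) // 2 + n

# Distinct (t, n) pairs always map to distinct naturals:
codes = {cantor_pair(t, n) for t in range(100) for n in range(100)}
print(len(codes))  # 10000 codes for 10000 pairs
```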


I copy an interesting paragraph:

"A notable result is that integration (in the sense of the Riemann integral) is computable. This might be considered surprising as an integral is (loosely speaking) an infinite sum. While this result could be explained by the fact that every computable function from {[0,1]} to R is uniformly continuous, the notable thing is that the modulus of continuity can always be computed without being explicitly given. A similarly surprising fact is that differentiation of complex functions is also computable, while the same result is false for real functions."

Are real functions not a subset of complex functions? Oh, if you restrict the output to R it gets harder, just like with finding zeros.


I think you could get an idea of some of the subtleties of this if we try to make what you mean by "subset" explicit.

Starting off on a set theoretic foot, what even is a function f:A->B? Well, it's a set of pairs (a, b) with a in A, b in B, such that (a, b) in f and (a', b) in f implies a = a'. In this sense, you could say a function r:R->R is a subset of some c:C->C by defining r = {(x, x') in c : x in R}
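The set-builder restriction above can be mimicked with finite stand-ins; a toy sketch (these pairs are made up, not actual function spaces):

```python
# A "function" c on a few complex points, encoded as a set of (input, output) pairs.
c = {(0 + 0j, 0 + 0j), (1 + 0j, 2 + 0j), (1j, -1 + 0j), (2 + 0j, 4 + 0j)}

# Its restriction r = {(x, x') in c : x in R} -- literally a subset of c.
r = {(x, y) for (x, y) in c if x.imag == 0}

print(r <= c)  # True: the restricted function IS a subset of the pairs of c
```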

On the other hand, the set of functions C->C is a different beast entirely, because that consists of all functions whose domain is C, whereas the set of functions R->R is all functions whose domain is R. While each individual function in R->R is a subset of some function in C->C, R->R itself is not a subset of C->C.

I'm no analyst, complex or real, but my intuition is that C simultaneously has more wiggle room in the topological sense (you have a whole extra dimension) but less wiggle room in an algebraic sense (degree-n polynomials are guaranteed to have n roots in C), so it's pretty easy to stumble upon surprising and unintuitive results.

Thinking back to my grad school complex analysis course, limits are fundamentally different in C. In R, you have two directions from which you can approach a point, but in C, any sequence converging on the point is a valid perspective from which the limit must make sense. You can approach the point from a straight line in any direction, or a squiggly line, or a spiral, etc etc. So this extra dimension of wiggle room puts a hell of a strong condition on what functions can even be differentiable in the first place, and this restriction on differentiability often translates to stronger results.


Don't think this article is correct when it says that Omega can't be computed to arbitrary precision, it certainly can be. While it's true that there is no algorithm that can compute BB(n) for all n, it is always possible to compute BB(k) for a specific and arbitrary k. This is a very subtle detail.

Similarly for Omega, while there's no single algorithm that can compute Omega to an arbitrary precision, that is not the same as saying that there's some upper bound on the precision to which Omega can be computed. For any precision k, it's possible to compute Omega up to that precision. The issue is that whatever algorithm you devise to compute Omega up to a precision of k, that algorithm will fail to compute Omega for some other precision q > k. You will need a fundamentally different algorithm for q > k, but such an algorithm does exist. You'll also need a fundamentally different algorithm for precision r > q, but that also exists... ad infinitum.


> Don't think this article is correct when it says that Omega can't be computed to arbitrary precision, it certainly can be.

It's not computable in this sense: there's no algorithm that given n returns Omega with n bits of precision, for any n. I think this qualifies as saying it cannot be computed to arbitrary precision :)

But I understand your point -- the language here is tricky.

Trying to make things clearer: Every program that does halt can be proven to halt -- simply run the program until it halts; if we assume it halts, then you get its proof for free. However, if we make no assumptions, a program may run forever, so we cannot know if it halts. I believe Godel's theorems also imply there must be non-halting programs for which no proof that they do not halt exists. This closes a very interesting loophole: you might, instead of just trying to run every program until it halts, check every proof to see if it is a valid proof that a given program P does not halt -- if such a proof exists, you will always find it. But because sometimes P does not halt yet no proof exists, this loophole is closed.

As others have mentioned, indeed I believe 'different algorithm for q > k, but such an algorithm does exist' does not pan out in the sense that we could prove the correctness of such algorithms, even though they do exist! :)

Another interesting angle: you can think that, even if you had an "oracle" that gives the "true" halting value (halts or does not halt), the size of a program that decides (by encoding the oracle values somehow) all halting programs up to n must grow with n. In other words, the halting values are incompressible somehow; if it were the case that the growth was bounded, then there would exist some program P that returns the true halting value of any program (false by Turing). This leads to the notion that non-computable numbers are random (or random-like), because they are fundamentally incompressible in a certain way (and you can't, on average, compress a random string).

Edit: minor corrections


This makes me consider a second loophole: even if, for a given set of axioms, we cannot prove that some given P does not halt (Godel's incompleteness theorems), that makes us wonder: can there be a procedure that suggests stronger, but still minimal, axioms (consistent with, say, Peano arithmetic or ZFC) and proves the statement in this new axiomatic basis?

(1) Would this proof be considered acceptable? (or can we devise a system that guarantees found proofs to be acceptable)

(2) Is this procedure feasible?

If anyone has literature on this topic, this is interesting! (an optimistic alternative to incompleteness)


> it is always possible to compute BB(k) for a specific and arbitrary k

This is not true.

There is a number N where BB(n) is incomputable for all n > N. This fact ties back to being able to solve the Halting Problem, in various clever ways, if you knew BB(n).

There is a 744-state Turing machine that halts iff the Riemann Hypothesis is false, so the value of BB(744) would settle the Riemann Hypothesis. This means it cannot be computed in general. Likewise, the value of BB(748) depends on the self-consistency of ZFC.


> it is always possible to compute BB(k) for a specific and arbitrary k

How do you do that once k is large enough that it admits Turing machines whose halting behavior is independent of our axioms?


Perhaps you need larger and larger sets of axioms as k increases.

This is fascinating to me, because Turing machines can in principle be manifested as physical objects. It feels like you shouldn’t need axioms if you have the thing sitting in front of you, but on the other hand how else are you supposed to prove something doesn’t halt?

Still, if you have two axiomatic systems that disagree, what does that mean? Surely you can just run the machine in question for a certain number of steps to determine which system is right.


Ah, the issue is that one system may be able to prove the correct answer but it remains independent for the other system.


BB(k) for a specific k is just a number. So just write it to the output. Proving correctness is different from "computing" and not required here.


> BB(k) for a specific k is just a number.

That's not as obvious as you might think. The value of BB(8000) is independent of ZFC. So it isn't "just a number" if you're working in ZFC.


Hmm... As ZFC specifically cannot identify that number as special (at least as relates to the Busy Beaver function), maybe it is quite specifically "just a number"!


Ah, so pointing out that the number itself, being an integer, is computable even though the function isn't. Makes sense.


Nice observation. Unless I'm missing something, that seems to kind of shoot down the main point.

The post and its analysis are still interesting, but it should probably be updated; otherwise a lot of people might take away this key point into their memory as they think about computability.


> it is always possible to compute BB(k) for a specific and arbitrary k.

It is not. There are a few proofs, most notably the fact that you'd have to solve the halting problem to find BB(k) for an arbitrary k. But the more interesting proof is when looking at the growth of the number of states required for larger BB(k)'s. You end up with a pretty cool contradiction[1] by constructing a very special Turing machine.

[1] http://www.coopersnotes.net/docs/langmach/CHAP10%20Busy%20Be... (pg. 403)


I feel like even after I point out the subtlety, people just read right past it and still don't recognize it.

Nothing in your source contradicts anything I've said.


> it is always possible to compute BB(k) for a specific and arbitrary k. This is a very subtle detail.

That's not subtle at all. Any finite number or string can be computed by a program consisting of a single very long print statement.
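Concretely, for k = 2 (where the 2-state step count S(2) = 6 happens to be known), the "program" is just:

```python
def bb_2():
    # The entire computation is a hardcoded constant. Knowing WHICH
    # constant to write down is the hard (and in general unprovable) part;
    # the program itself trivially "computes" the 2-state value.
    return 6

print(bb_2())  # 6
```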


Fair enough, one misconception I always hear from people learning about Godel's incompleteness theorem or the halting problem is that it means there exist algorithms where we can never know if it halts or not. Like there's some limit to math that makes it impossible for us to know whether a particular algorithm just keeps running forever or will eventually stop.

No such algorithm exists; no such example exists, because for any particular algorithm it's always possible to determine whether it halts or not (at least in principle). It might not be practical, and the proof might be incredibly difficult, but math does not have any kind of intrinsic limitation that prevents us from proving whether a specific algorithm halts.


No, you cannot compute BB(k) for a given k. https://scottaaronson.blog/?p=2725


I love that article, but it does not state what you are claiming it does. What it states is that there exists a k (whose upper bound is 8000) whose solution is independent of ZFC. That does not mean it can't be computed.

In the very comment section of that blog post someone brings this issue up, and Scott himself says:

"Yes, fine, you’re right, there’s not a single “boundary between the knowable and the unknowable.” What there is, is more like a concentric series of boundaries, with signs warning you that as you proceed outward you need to take on bolder and bolder axioms (which might be large-cardinal axioms and might be something else). What’s significant about going beyond ZFC is just that, by that point, you’ve clearly passed the first of those boundaries."


Suppose BB(8000) could be computed: 1) Compute BB(8000). 2) Run the program specified in the article until it either halts or it runs for BB(8000)+1 steps. 3) We then know if the program ever halts, and then have a proof within ZFC whether or not ZFC is sound.


No, what you will have is a proof that ZFC is sound but that proof will not be within ZFC. That proof will rely on additional axioms that are extensions to ZFC.


Well, OK, I suppose you could add something like 'BB(8000) is 6,' but once you go beyond ZFC, you're going beyond anything required for computation as we know it.


No, this is also wrong. You would not be able to add an axiom like 'BB(8000) is 6' or just any arbitrary value without introducing an inconsistency. The reason for the independence of BB(8000) from ZFC is precisely that there is some non-standard model of arithmetic in which a Turing machine halts while it does not halt in the standard model.

The only axioms you could introduce to ZFC that would not introduce an inconsistency would be one that eliminates that non-standard model of arithmetic without also eliminating the standard model of arithmetic.

https://en.wikipedia.org/wiki/Non-standard_model_of_arithmet...

None of this requires going beyond any kind of notion of computation and any value that is computed for one such extension would necessarily have to be the same value among all consistent extensions of ZFC. It's not like you could have one extension of ZFC where BB(8000) is X and another extension where BB(8000) is Y without one of those extensions being inconsistent.

There are plenty of extensions of ZFC that add new axioms for the sake of exploring niche mathematical ideas. ZFC is nice in that it's easily motivated and can serve as a well understood foundation whose proofs can be stated without reference to additional hypotheses. It also covers an unbelievably broad swath of mathematics, but mathematicians studying set theory often extend ZFC with additional axioms such as large cardinal axioms; von Neumann-Bernays-Godel (NBG) set theory is another common extension, as is Kelley-Morse (KM) set theory, which can even prove the consistency of ZFC. As I referenced in Scott's quote, BB(8000) is almost certainly computable by adopting one of the large cardinal axioms.


> it is always possible to compute BB(k) for a specific and arbitrary k

but wouldn't this be equivalent to solving the Halting Problem?


No, because the halting problem requires implementing a function that can compute whether any other function halts. It has to work for any possible function you give it. You can, of course, compute whether some functions halt. Two easy ways to do so are to observe that it halts or observe an infinite loop.


But if you know BB(n) for any n, then you can just run a program of length n for BB(n) steps. If after BB(n) steps it hasn't halted, you know that the program will loop forever. Since you can do that for any arbitrary program, you could solve the halting problem.
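This reduction can be shown in miniature for 2-state, 2-symbol Turing machines, where S(2) = 6 is actually known (the machine encoding and the `halts_with_oracle` helper are my own illustration, not from the thread):

```python
def halts_with_oracle(tm, n_states, bb):
    """If BB(n) (max steps of any halting n-state machine) were known for
    all n, halting would be decidable: simulate for BB(n) steps; if the
    machine hasn't halted by then, it never will."""
    limit = bb[n_states]
    tape, pos, state = {}, 0, 0
    for _ in range(limit):
        if state == -1:          # -1 encodes the halt state
            return True
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state = nxt
    return state == -1

bb = {2: 6}  # S(2) = 6 is a known busy beaver value

# A machine that loops forever (stays in state 0, marching right):
loop = {(0, 0): (0, 1, 0), (0, 1): (0, 1, 0),
        (1, 0): (0, 1, 1), (1, 1): (0, 1, 1)}
# The 2-state champion, which halts after exactly 6 steps:
champ = {(0, 0): (1, 1, 1), (0, 1): (1, -1, 1),
         (1, 0): (1, -1, 0), (1, 1): (1, 1, -1)}

print(halts_with_oracle(loop, 2, bb))   # False
print(halts_with_oracle(champ, 2, bb))  # True
```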


Isn’t the crux of the issue that you can’t “observe” an infinite loop?


No, the crux of the issue is that you can't observe every infinite loop.


If you recall from predicate logic, these two statements are not the same:

1) For all Turing Machines A, there exists a Turing Machine M such that M can compute whether A halts.

2) There exists a Turing Machine M such that for all Turing Machines A, M can compute whether A halts.

Statement 1 is true, statement 2 is false.

You can apply a similar reasoning to BB(n) or your notion of Omega. For any particular choice of n, there is an algorithm that can compute BB(n) and similarly for any arbitrary precision p, there is an algorithm that can compute Omega up to that precision.

None of this is to be taken that there exists an algorithm that can compute BB(n) for all n, or that there exists an algorithm that can compute Omega up to precision p for all p.


But imagine that we could compute BB(n) for any n, even if we need to use a different algorithm for each n. Then, we could build the mapping n -> BB(n). But the existence of this mapping is forbidden by the unsolvability of the Halting Problem, isn't it?


No, you can't conclude this; in fact the more general principle is that being able to prove P(0), P(1), P(2), etc. does not mean that it's also possible to prove "for all n, P(n)". The property of a formal system for which this does follow is known as omega-completeness, but Peano arithmetic, and ZFC more broadly, are omega-incomplete systems; hence there are propositions for which P(0), P(1), P(2), ... can all individually be proven and yet it is not possible to prove "for all n, P(n)".

Wikipedia touches on this subject with an article on omega-consistent systems, which is a very closely related property:

https://en.wikipedia.org/wiki/%CE%A9-consistent_theory

And this gives a very brief description of Omega completeness:

https://encyclopediaofmath.org/wiki/Omega-completeness


> But imagine that we could compute BB(n) for any n, even if we need to use a different algorithm for each n.

This is itself uncomputable; therefore

> Then, we could build the mapping n -> BB(n).

This mapping is not computable.

Edit as I'm rate-limited:

What I mean is, informally, the incomputability of the algorithm to select the correct algorithm to compute BB(n) is sufficient to make that mapping uncomputable.

However, that does not mean that an algorithm to compute BB(n) for a specific n does not exist. Trivially, it does; "output BB(n)". It's just very uninteresting! What you're asking for, rather, is more like "an algorithm to prove BB(n)=x", which is a closely related but distinct problem.


I’m not sure I buy the argument that you can say that we know the halting problem is unsolvable and therefore we can’t calculate BB.

The halting problem is unsolvable due to a contradiction, but there is a finite set of programs with a given number of symbols; one or more of them share the maximum finite running time, and if you did have an oracle to answer that, you could in fact run the algorithm provided, setting aside how long this would take in real time.

A different take is that you can write a program that will only terminate if ZF set theory is inconsistent, which is impossible to prove within ZF itself due to Gödel's incompleteness theorems. We assume this program should run forever, but we would not be able to prove it. That program provides an upper bound on the highest BB number we could ever hope to prove with certainty: 748, according to this article from a few years back.

The problem with the post is that an all-knowing oracle doesn't have this limitation: assuming the program has finite length, whether or not we can prove that it halts does not stop there from being an answer to that question.

https://www.quantamagazine.org/the-busy-beaver-game-illumina...


Thanks for the link, it was an interesting read!

I'm not arguing that you can't calculate BB numbers; in fact, we've calculated some of them. What I'm arguing is that you can't calculate all the BB numbers, since this would mean that the Halting Problem is solvable. Using the results of your link, I guess we can say that BB numbers can be computed only if n < 748 (or n < 20 according to Aaronson).

> In 2016, he and his graduate student Adam Yedidia specified a 7,910-rule Turing machine that would only halt if ZF set theory is inconsistent. This means BB(7,910) is a calculation that eludes the axioms of ZF set theory. Those axioms can’t be used to prove that BB(7,910) represents one number instead of another, which is like not being able to prove that 2 + 2 = 4 instead of 5.

OMG! A number so big that it can't even be pinned down within ZF set theory.


> if you did have the oracle to answer that

I think you are correct, but the question is simply if we can compute this maximum number - without any oracles.

Then the answer will be no, since we could decide the halting problem as shown by the author.


Yes, an all-knowing oracle knows the busy beaver numbers.


I don't think so. If that was the case, then for any program we could compute its length n, and then ask the all-knowing oracle the value of BB(n). And knowing BB(n) for any n we could solve the Halting Problem, but we know that the Halting Problem is unsolvable. Therefore we must conclude that such an all-knowing oracle can't exist.


Yes, that’s my point. The all-knowing oracle could also just tell you whether or not your program halts.


I think it’s much simpler than that. If you remember your maths well enough to remember the proof that the reals are uncountable and the rationals are countable, consider that the list of programs that produce a number is countable. You don’t need to check if the program halts or not, even if we “count” the “numbers” from the programs that don’t halt _it’s still countable_. But we’re still left with an uncountable set of numbers for which we can’t even write a program.

Easiest way to construct one is to use the diagonal argument that’s used to prove, amongst other things, the halting problem.
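A sketch of the diagonal construction over a toy finite enumeration (real diagonalization ranges over all number-producing programs, which of course can't be materialized here):

```python
from itertools import islice

def diagonal(digits_of):
    """Cantor's diagonal: emit digits of a number whose i-th digit
    differs from the i-th digit of the i-th enumerated number, so it
    equals none of them."""
    i = 0
    while True:
        yield 5 if digits_of(i, i) != 5 else 6
        i += 1

# Toy stand-in for "the j-th digit printed by the i-th program":
rows = [[1, 1, 1, 1], [2, 7, 1, 8], [3, 1, 4, 1], [5, 5, 5, 5]]
d = list(islice(diagonal(lambda i, j: rows[i][j]), 4))
print(d)  # [5, 5, 5, 6] -- differs from row i at position i
```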


Ah, I wrote the same proof above. I agree this is a much simpler way of proving the existence and size of the set of uncomputable numbers. The busy beaver problem is just an example of an interesting case


yes, I didn't think about it, but you're right! However, I liked the idea of having a constructive way to build an uncomputable number


You have an assumption that each number needs a unique program to compute the digits. You'd need to prove that statement first.


By definition, each program only produces the digits of one number. It's okay for more than one program to produce the same number. Am I misunderstanding your point?


I didn’t realize the initial state of the tape was part of the definition of a Turing machine. I was picturing having the same operations applied to different tapes, but that’s not the definition so I’m wrong.


How can the same program produce multiple different results?


> Even if God himself came from Heaven, he could not compute these numbers. Uncomputable numbers are numbers that can’t be computed because of maths limitations.

I like to think that God has access to much better math and computational resources than we can even begin to conceive of. ;-)


There are numbers that god can't compute, but that supergod can compute. Of course, that also means there are numbers that supergod can't compute, but that superdupergod can compute. And from there it's turtles all the way up.


> since we don’t know Σ(n) we can’t know the exact value of Ω.

Hmmm. Just because you don't know the individual terms of the sequence, it doesn't follow that you don't know their infinite sum. Here's a trivial example: take the series with terms `(-1)^n Σ(floor(n/2))`, i.e. Σ(0) - Σ(0) + Σ(1) - Σ(1) + ... Grouping consecutive pairs, everything cancels, so the (grouped) sum is obviously 0, yet we can't calculate each of the terms, nor many of the partial sums.


> Just because you don't know the individual terms of the sequence, it doesn't follow that you don't know their infinite sum

Yes, this is not true in general. But I think it's true for the specific definition of Ω we use in the post.


Intuitively I'd say you're probably right. But computation theory is full of counter-intuitive results. Your statement might be provable by adapting the argument that Σ(n) is not always computable. Something like computing the sum would require solving some version of the halting problem.

Perhaps add a small edit to your article to highlight that this particular statement isn't to be taken as a mathematical proof?


Something about that proof feels wrong.. Are you sure the allusion to the halting problem applies here?

The proof that the halting problem can't be computed relies on a self-reference where you query whether the program halted and then do the opposite. But- and here's the important part- when the program references itself it requires at least one more symbol to do so as it needs somewhere to store the answer. It therefore needs an infinite stack. But we have a fixed number of symbols. Or, to put it another way: if your set of symbols is bounded, then you can only call functions that use at least one fewer symbol than the calling one, which means you will eventually run out of symbols and have a base case. Doesn't that mean the halting problem doesn't apply to the busy beaver problem?


I feel the article is mixing up S(n) (max number of steps) and Σ(n) (max number of 1s written), going by the wikipedia article of BB https://en.wikipedia.org/wiki/Busy_beaver

Notably the list of BB numbers in the article matches the values of S(n) in wikipedia instead of Σ(n). Also this sentence makes more sense if you replace Σ(n) with S(n)

> If after these steps the program hasn’t halted it means it’ll never halt (by the definition of the Σ(n)).


Oh yeah, you're right, I mixed up the terms. I just started learning about this topic and I got excited and wrote the post. I'll fix it ASAP. Thank you very much for spotting it :)


I'm unfamiliar with BB(n), so this raises an interesting question for me. How do we know that BB(n) exists for all n? This is relevant to this post because the author's definition of omega relies on BB(n) being defined for all n. Obviously there are numbers that are not computable, but it seems one needs to be careful that the numbers discussed are sure to exist.


> How do we know that BB(n) exists for all n?

What would it mean for BB(n) to "not exist"? Recall that BB(n) is defined in a very concrete way: you take all the Turing machines of size n (which are easily enumerable combinatorial objects), remove the ones which do not halt, then take the largest runtime of the ones that are left. Since we can always write down a Turing machine which halts immediately, we know that there's a trivial lower bound for each BB(n), and since the Turing machines are deterministic, we know that any machine that halts must do so in a fixed number of steps. With such a straightforward construction, it's hard to see how such a number could "not exist," at least not without appealing to an ontology somewhat outside of the mainstream (e.g. ultrafinitism).
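For tiny n the definition is concrete enough to brute-force. A sketch for 2-state, 2-symbol machines, leaning on the known fact that S(2) = 6 so a generous step cutoff safely classifies the halters (for larger n no such safe cutoff is computable, which is the whole point):

```python
from itertools import product

def run(tm, limit):
    """Simulate a 2-symbol TM from a blank tape; return the number of
    steps to halt, or None if it exceeds the limit."""
    tape, pos, state, steps = {}, 0, 0, 0
    while state != -1:                     # -1 encodes the halt state
        if steps >= limit:
            return None
        write, move, nxt = tm[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        state = nxt
        steps += 1
    return steps

def s_2(limit=50):
    """Max runtime over all halting 2-state machines, i.e. S(2).
    The cutoff of 50 is safe only because S(2) = 6 is already known."""
    cells = list(product([0, 1], [-1, 1], [0, 1, -1]))  # (write, move, next)
    best = 0
    for t in product(cells, repeat=4):
        tm = {(0, 0): t[0], (0, 1): t[1], (1, 0): t[2], (1, 1): t[3]}
        steps = run(tm, limit)
        if steps is not None:
            best = max(best, steps)
    return best

print(s_2())  # 6
```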


In the proof you just provided, you have a step "remove ones which do not halt". By which process do you do that? By the halting problem, you can't actually make that selection. This ties back into the fact that BB(n) is not computable for all n. I guess what I'm looking for is a proof that the number exists without violating the halting problem. Intuitively, we feel BB(n) should exist because there are finitely many Turing machines with n states. But I'm not sure how to express that rigorously. Or maybe relying on a solution to the halting problem doesn't matter in terms of a proof.


The Halting Problem forbids you from building an algorithm that decides whether a generic program halts or not. This doesn't forbid you from checking whether a specific algorithm halts or not. For example, you know that the specific program `print("Hello World")` halts. So you can remove the ones that do not halt by inspecting them one by one and developing a specific algorithm for each one that determines if it halts or not.


If you could inspect any Turing machine and devise a specific algorithm that determines if it halts, then you yourself would be an algorithm that solves the generic halting problem.


no, because you can build an algorithm that fools the algorithm that solves the generic halting problem. I wrote a "proof" of that in my blog [1]

[1]: https://www.amolas.dev/blog/halting-problem/


Exactly. So your claim that:

> So you can remove the ones that do not halt by inspecting them one by one and developing a specific algorithm for each one that determines if it halts or not.

Is impossible. You can’t, in general, inspect Turing machines one-by-one to determine if they halt.


You don’t need to show a process.

Trivially a halting program for any n exists. Also trivially there must be some halting program with the highest number of steps.

We can’t necessarily find that program, but almost by definition there is some halting program with the highest number of steps


> Also trivially there must be some halting program with the highest number of steps.

That isn't trivial. Whether a given program halts might be independent of ZFC, or of any consistent logical system you might try to use. The question of whether such a program "really" halts is one which might not have an answer. ZFC claims there must be an answer, because LEM, but why should we care what ZFC or first order logic or anything has to say on the matter, if they can't actually tell us whether it halts?


> By which process do you do that?

It's not a constructive proof, you don't need to give a process.


That's true. The part that "feels" weird is that there is no algorithm that could perform the separation into halting / non-halting subsets. Choosing elements from a set based on uncomputable properties almost feels like an extension of the axiom of choice.


> How do we know that BB(n) exists for all n?

It's trivial to construct a candidate for each n, which gives a lower bound.


> If after these steps the program hasn’t halted it means it’ll never halt (by the definition of the Sigma(n))

This argument doesn't quite work since Sigma(n) is the maximum number of 1s written before halting rather than the maximum finite number of steps of computation. The argument would work for the other flavor of Busy Beaver, S(n), though.


cool series. In my opinion, Chaitin's constant is the best example for understanding uncomputable numbers; hopefully the author can include it later.


Thanks for your comment! Indeed, Chaitin's constant is an amazing example; however, I failed to understand how it can be computed, at least approximately. What I like about the example in the post is that you can compute some digits of the $\Omega$ constant, which in my opinion is more pedagogical.

If you have some resources regarding Chaitin constant I would be very happy to read them :)


Chaitin computed the first few bits of a LISP-based version of Omega in one of his books. For my own lambda-based version of Omega I was able to compute the first 4 bits [1].

[1] https://tromp.github.io/cl/Binary_lambda_calculus.html#Halti...


I'll look up some videos from a while ago. Again, great series, hope you keep publishing!


haha thank you very much! I'm happy you enjoyed it :)


The code he gives and the explanation of the busy beaver problem strike me as pretty poor. If I didn't already know about the busy beaver problem, I don't think I could get it from this explanation.


hi, author here! Thanks for your comment. Which were the parts that are poorly explained? I would love to get some feedback from you. I just started learning about computability theory, so maybe I explained some concepts wrong. Also, English is not my first language, so perhaps I lost something in translation.

Thanks :)



