Perhaps you need larger and larger sets of axioms as k increases.
This is fascinating to me, because Turing machines can in principle be manifested as physical objects. It feels like you shouldn’t need axioms if you have the thing sitting in front of you, but on the other hand, how else are you supposed to prove something doesn’t halt?
Still, if you have two axiomatic systems that disagree, what does that mean? Surely you can just run the machine in question for a certain number of steps to determine which system is right.
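To make that concrete, here's a minimal sketch (Python, with a made-up two-state machine purely as an example) of what "run it for a certain number of steps" amounts to. The asymmetry shows up in the return values: a bounded run can confirm that a machine halts, but a run that hasn't halted yet doesn't settle anything, which is where the axioms come back in.

```python
from collections import defaultdict

# Hypothetical transition table: (state, symbol) -> (write, move, next_state).
# This particular 2-state machine is only an illustration; it halts quickly.
DELTA = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run_bounded(delta, max_steps):
    """Simulate on a blank tape for at most max_steps steps."""
    tape = defaultdict(int)   # blank tape of 0s
    state, head = "A", 0
    for step in range(max_steps):
        symbol = tape[head]
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += move
        if state == "HALT":
            # A halt within the bound is a definitive, checkable fact.
            return f"halted after {step + 1} steps"
    # No halt within the bound proves nothing: it might halt at
    # max_steps + 1, or never.
    return f"no halt within {max_steps} steps"

print(run_bounded(DELTA, 1_000_000))  # "halted after 6 steps" for this toy machine
```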