Perhaps you need larger and larger sets of axioms as k increases.

This is fascinating to me, because Turing machines can in principle be manifested as physical objects. It feels like you shouldn’t need axioms if you have the thing sitting in front of you, but on the other hand, how else are you supposed to prove that something doesn’t halt?

Still, if you have two axiomatic systems that disagree, what does that mean? Surely you can just run the machine in question for a certain number of steps to determine which system is right.

Ah, the issue is that one system may be able to prove the correct answer while the statement remains independent of the other system. And if the machine in question never halts, running it for any finite number of steps can’t settle the matter: all you ever observe is “hasn’t halted yet.”
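
To make the asymmetry concrete, here is a minimal sketch (Python, with a made-up transition-table encoding; nothing here is from the thread): a bounded run can produce a finite certificate that a machine halts, but a machine that never halts looks identical at every finite step budget.

    def run_bounded(delta, start_state, max_steps):
        """delta maps (state, symbol) -> (write, move, next_state);
        a missing key means the machine halts. Returns the step at
        which it halted, or None if still running after max_steps."""
        tape = {}                      # sparse tape, blank symbol is 0
        head, state = 0, start_state
        for step in range(max_steps):
            key = (state, tape.get(head, 0))
            if key not in delta:       # no transition defined: halt
                return step            # finite certificate of halting
            write, move, state = delta[key]
            tape[head] = write
            head += 1 if move == "R" else -1
        return None                    # budget exhausted: learned nothing

    # A one-state machine that loops forever, moving right over blanks:
    loops = {(0, 0): (0, "R", 0)}
    print(run_bounded(loops, 0, 10_000))    # None, for any budget
    # A machine with no transitions at all halts immediately:
    print(run_bounded({}, 0, 10_000))       # 0

A None result is the independence problem in miniature: it is consistent both with “halts at step 10,001” and with “never halts,” and no larger budget changes that for a genuinely non-halting machine. To rule halting out you need a proof, which is where the axioms come back in.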