
Having an educational background in physics, I find the Turing machine a much more intuitive model of computation than, say, lambda calculus. To me this is Turing’s main contribution: linking the abstract world of computation to the physical world, and proving that a very simple physical machine can perform any computation (Turing completeness). That’s no small contribution.


> and proving that a very simple physical machine can perform any computation (Turing completeness)

This is a misunderstanding of the Turing machine model. The Turing machine is not designed to be a realistically implementable physical machine, and indeed there are no details in Turing's paper on how such a physical machine could be achieved.

Instead, the Turing machine model is designed to be a mechanistic model of what a mathematician does (in rote day-to-day tasks at least). The tape is a representation of the notebook where the mathematician writes down notes. The read head is a representation of the mathematician reading from the notebook.

It's fascinating to read the paper because of this, since it spends quite a few paragraphs showing that this simplification doesn't lose anything of the mathematician's work. It spends some time noting that even though paper is two-dimensional, its contents can be losslessly represented on a one-dimensional tape. It spends time noting that writing/reading one symbol at a time is a good enough approximation of human writing/reading. It spends time noting that the next step doesn't need to depend on more than the current symbol + internal state, as human attention is also focused on a limited number of symbols.

This is actually why Turing's paper is so convincing on the argument of universal computation - not because the machine is realizable, but because it's hard to invent anything that a mathematician does while calculating that is not captured by the Turing machine model.
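
To make this concrete, here is a minimal sketch of the model in Python (my own illustration, not anything from the paper; the state names and encoding are made up). The entire machine is a lookup table from (state, symbol read) to (symbol to write, head move, next state); this particular table increments a binary number:

    from collections import defaultdict

    # transition table: (state, symbol) -> (write, move, next state)
    # a tiny machine that increments a binary number, e.g. "1011" -> "1100"
    RULES = {
        ('right', '0'): ('0', +1, 'right'),  # scan right to the end of the number
        ('right', '1'): ('1', +1, 'right'),
        ('right', '_'): ('_', -1, 'carry'),  # past the end: back up and start carrying
        ('carry', '1'): ('0', -1, 'carry'),  # 1 + carry = 0, keep carrying leftward
        ('carry', '0'): ('1',  0, 'done'),   # absorb the carry and halt
        ('carry', '_'): ('1',  0, 'done'),   # carried past the first bit: new digit
    }

    def run(rules, tape_str, state='right'):
        tape = defaultdict(lambda: '_', enumerate(tape_str))  # blanks on demand
        head = 0
        while state != 'done':
            write, move, state = rules[(state, tape[head])]  # read, then act
            tape[head] = write
            head += move
        return ''.join(tape[i] for i in sorted(tape)).strip('_')

    print(run(RULES, '1011'))  # -> 1100

Everything the machine ever does is in that table; there is no residual step where something like 'thinking' could hide.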

I very much recommend reading the first part of the paper [0] to see this argument (the second part, where it is shown that this abstract machine can in fact compute any computable number, is more technical and went over my head).

[0] PDF https://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf


> This is a misunderstanding of the Turing machine model. The Turing machine is not designed to be a realistically implementable physical machine, and indeed there are no details in Turing's paper on how such a physical machine could be achieved.

I’ve read the paper. I think we just take different things from it, possibly because you have a background in mathematics?

To me, the main takeaway (if I imagine reading it in 1936) is that a universal Turing machine is not all that complicated, and it arouses the “I could build this thing” intuition.

That of course doesn’t mean that Turing intended it to be realizable, that’s not my point. But he appeals to an engineer’s intuition. It’s that intuitive link that’s valuable and unique IMHO.

BTW, I think your takeaway is probably clearer in Gödel’s work.


The Turing machine has a tape of unbounded size, so it can’t be built simpliciter.

Moreover, although that model of computation turns out to be very robust and sufficient for all purposes in physics (unless black holes or something allow hypercomputation), Turing does not really show that definitively, and in a sense it can’t be definitively shown. All we have is a lack of counterexamples (admittedly a very convincing one).

I don’t see why this intuition is that helpful generally either; Turing machines don't really help at an implementation level with modern engineering problems as far as I can tell. Most of the time you already know that what you want to do is possible in finite time &c.; presumably the difficulty is actually doing it, and going via the Turing formalism would be a bit odd.


> The Turing machine has a tape of unbounded size so can’t be built simpliciter.

On the contrary, I think this is one of the advantages of Turing’s model: I can imagine standing there in my garage looking on as my universal Turing machine is running low on tape on the left side, and then simply attaching a new roll of fresh empty tape at the end, holding it as it is fed into the machine. :)

It’s simply the least leaky abstraction of computation.


Exactly this. Unbounded doesn't mean infinite, and people are sometimes confused by the distinction.


Including me. What is the difference?


Some parts of mathematics deal with infinite sequences, that is, actually infinite lists of numbers. It's usually assumed, and important for analysis, that these numbers are considered to be "all there" right from the beginning. You can do operations like: Compute the limit. Add up all of its elements. Determine whether two sequences are identical for all entries after the trillionth.

I think this is often part of the misunderstanding when you stumble into a post by someone who's confused about 0.999... = 1. People sometimes write things like: "0.999... only moves closer and closer to 1, it never reaches 1." I think that highlights a deeper point than people usually give these comments credit for. The thing is, 0.999... doesn't "move" anywhere, it's considered a completed object right from the beginning.
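
For what it's worth, the standard way to make the "completed object" idea precise: the notation 0.999... names the limit of the sequence of partial sums, and that limit is exactly 1:

    0.999... = sum over k = 1, 2, 3, ... of 9/10^k
             = lim (n -> inf) of 1 - 10^(-n)
             = 1

Nothing moves; the limit is one fixed number.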

Anyway, the point is that Turing machines are not like this at all. They only look at a fixed-size part of the tape during each step, from which it follows that they have only ever used a finite amount of tape at any point of their execution.

So for any given (halting) computation, you don't actually need an infinite tape, you just need "enough", without changing the result. This is important because it makes Turing machines a model for practical computers. For example, the device you're reading this on has gigabytes of tape, and that's big enough for many, many, many kinds of computation.
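
A toy illustration of that (my own sketch, nothing more): extend the tape only when the head runs off an edge. No bound is fixed in advance, yet after any finite number of steps only finitely many cells exist:

    # splice on a fresh blank cell whenever the head runs off
    # the right edge -- the "new roll of tape" from upthread
    tape, head = ['_'], 0
    for step in range(100):
        if head == len(tape):
            tape.append('_')  # unbounded: we can always add more...
        tape[head] = '1'
        head += 1
    print(len(tape))  # ...but what exists at any moment is finite: 100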


In a theoretical sense, an unbounded number is always finite.

In a practical sense, Turing machines don't voraciously consume tape. Adding extra feet of tape gives you an exponential increase in what you can compute. So if you set up a program to be reasonably judicious with its tape use, you can just say that if it reaches an end you pause it for a day, head to the shop, and buy another reel. Big computations take a lot of time anyway.
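
Rough arithmetic behind the "exponential" claim (my own back-of-the-envelope, with made-up names): a machine with a given number of control states and tape symbols, restricted to n cells, has at most states x head positions x tape contents distinct configurations:

    # upper bound on distinct machine configurations with n usable cells
    configs = lambda states, symbols, n: states * n * symbols ** n

    # ten extra cells of binary tape: over a thousand times more configurations
    print(configs(10, 2, 50) / configs(10, 2, 40))  # 1280.0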


Any given number is always bounded. I am not sure it makes sense to talk about an unbounded number.


The portion of the unbounded tape that has actually been used is always finite, is that better?


The usual difference is just quantifier ordering -- (1) for every program there exists a tape big enough vs. (2) there exists a tape big enough for every program. In the first case, each individual (valid) program can get by with a tape of _some_ fixed length, but there's no bound on how big that requisite length might be. In the second case, since the tape requirements can be arbitrarily high, you would need a legitimately infinite tape to service all possible programs.
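
Spelled out, with space(p) as loose shorthand (mine, not standard notation) for the tape length program p gets by with:

    (1) for every program p, there exists n : space(p) <= n   -- true for halting programs
    (2) there exists n, for every program p : space(p) <= n   -- false: no single n works

Only (1) holds, which is why "unbounded" is the right word rather than "infinite".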

IMO the example upthread muddies the waters a bit by conflating the conceptual tape a given machine is running on (which might be infinite for non-halting programs) with the physical tape you've used so far (which only needs to be finite at any fixed moment in time, though the amount needed might grow unboundedly as time increases).


Note that tape usage typically depends on the input, so I would distinguish programs and computations (program + input).


I'm with you, I also found Turing's argument that his machine model captures all of computation very convincing and pointed that out in another thread.

However, for this argument to work, we need to accept both that all computation is captured by Turing machines, and also that what Turing machines do is in fact computable. In essence, Turing machine <=> realizable machine. Maybe some people are more impressed by one, others more by the other direction of that double implication.


> the Turing machine model is designed to be a mechanistic model of what a mathematician does (in rote day-to-day tasks at least)

Or more accurately, what human computers did in those days, i.e. the rooms full of people algorithmically working out numerical calculations for e.g. physicists, without understanding what they were doing beyond the mechanical steps they were taking. In other words, a formalization of so-called effective methods.


This dualism in CS still carries on to this day. Essentially, the lambda calculus is the forefather of the functional approach to computation, while the Turing machine epitomizes the imperative paradigm.
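
A toy contrast in Python (my own example, with made-up names): the same factorial written once as a single pure expression in the lambda-calculus spirit, with recursion via self-application instead of named state, and once as Turing-machine-style step-by-step mutation of state:

    # lambda-calculus spirit: one pure expression, no assignment,
    # recursion achieved by passing the function to itself
    fact = lambda f: lambda n: 1 if n == 0 else n * f(f)(n - 1)
    print(fact(fact)(5))  # 120

    # Turing-machine spirit: explicit state, updated one step at a time
    def fact_imperative(n):
        acc = 1
        while n > 0:
            acc, n = acc * n, n - 1  # each loop iteration is one "step"
        return acc
    print(fact_imperative(5))  # 120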


> and proving that a very simple physical machine can perform any computation (Turing completeness)

Not proving, conjecturing. It has not been proven to this day: https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


It can never be “proven” because the notion of “any computation” being referred to is informal.

Also, it can’t perform any computation, if we say hypercomputation is a form of computation. Hypercomputation is (as far as we know) physically impossible, but so strictly speaking are Turing machines - a true Turing machine has unlimited storage, and an unlimited amount of time in which to complete its computations - any Turing machine you could physically construct would only be a finite approximation of a real one.


Ah, ok, I should have said “any computation that can be performed by any Turing Machine”.


So okay, yeah, it's Turing completeness that matters the most to me in computer science, on a purely pragmatic basis: when people are pitching the latest whiz-bang doodad, the answer is always, "This isn't doing anything we couldn't already do, so how does it make things easier, and does it do so efficiently enough for our purposes?" That's the proper question when it comes to monads, coroutines, actors, non-blocking blah blah, etc. etc.

That's really important in an industry saturated with hype, elitism and general nonsense. Anything I can do in Rust, you can do in Assembly, so I've got some explaining to do (I can probably succeed in this example, others maybe not).

If Turing actually failed to deliver the goods on "completeness", I'd really like to resolve that.


Back in my undergraduate days more than a decade ago, I think I learnt both lambda calculus and the Turing machine in the same class: Formal Languages and Automata Theory.

Of course the Turing machine is easier to understand because... it's a theoretical machine, after all. On the other hand, lambda calculus was weirder; I didn't get it until learning Haskell :D


> I find the Turing Machine a much more intuitive model of computation than say lambda calculus

I think register machines are more intuitive than Turing machines - they are much closer to how real world computers work.
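
For comparison, a minimal counter/register machine sketch in Python (my own toy encoding, not any standard instruction set): unbounded registers plus three instructions, which indeed reads much more like assembly than like tape-shuffling:

    from collections import defaultdict

    # three instructions: ('INC', r) increments register r,
    # ('DEC', r) decrements it (no-op at zero),
    # ('JZ', r, addr) jumps to addr if register r is zero
    def run_rm(program, registers):
        regs, pc = defaultdict(int, registers), 0
        while pc < len(program):
            op, r, *addr = program[pc]
            if op == 'INC':
                regs[r] += 1
            elif op == 'DEC' and regs[r] > 0:
                regs[r] -= 1
            elif op == 'JZ' and regs[r] == 0:
                pc = addr[0]
                continue
            pc += 1
        return dict(regs)

    # r0 += r1 by draining r1 (register 2 stays zero, so JZ on it is a plain goto)
    ADD = [('JZ', 1, 4), ('DEC', 1), ('INC', 0), ('JZ', 2, 0)]
    print(run_rm(ADD, {0: 2, 1: 3}))  # {0: 5, 1: 0, 2: 0}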


But that's the opposite direction from Turing's intention. The point of the Turing machine model is for the machine to be both mechanically plausible (no magic required) but also equivalent to what a human does when performing computation.

The Turing machine model is a mechanically plausible abstraction of a human performing a computation by following some steps in their mind and with a notebook. The tape stands in for the notebook. The read head stands in for the human's eyes. The write head stands in for their pencil hand. The internal state of the machine stands in for their mental state. At every step, you either read a symbol from the notebook tape and change your mental state in relation to this symbol, or write a symbol based on the current mental state. The procedure itself can be written on the tape, and you can occasionally refer back to it.

The original paper spends a good few pages working out this metaphor and showing that the machine model perfectly abstracts the mathematician's work of computation.


Yes, on the abstract computation side of the link register machines are much more intuitive.

But on the physical side of the link they are much less intuitive IMHO: it’s much less clear that “this is just a machine that I could build in my garage”.


The Turing machine is definitely not some machine that you could build in your garage. None of the mechanisms are specified or even specifiable. The important part, to Turing at least, is that it perfectly matches what a human does while computing a number, and that there are no magical steps like 'thinking'. Read symbol, change internal state, write other symbol down, rinse and repeat.

All of the details of how a symbol is read or recognized, how it alters the internal state, how the next symbol is chosen, or even how many symbols you actually need are not mentioned, and are even considered irrelevant. Turing wasn't building one of these; he was proving that this model captures all known computation, and that even so it is undecidable whether such a machine will ever stop on an arbitrary computation.


No “the Turing Machine” isn’t a machine you can build in your garage. It’s an abstraction.

But any individual Turing machine is. Building a simple one is not very hard, and you can imagine supplying it with more and more tape as it needs it.

It’s thus the only model of computation that I can fully imagine “working”. And that to me is the beauty of Turing’s contribution.


The “physical side” was probably more important when Turing first came up with the idea, and people struggled to conceive of computers because none had yet been built. Nowadays it is arguably less necessary because they are an essential part of everyday life, and most people learn some programming before learning theoretical computer science.


In an era when you can buy a RAM chip, a register machine is a really easy abstraction.

If you're trying to imagine something you can mechanically assemble out of discrete components, it's not so great. You need an unlimited number of components hooked up in complicated ways.

A Turing machine is a fixed-size and relatively simple box, plus a long tape that feeds through.



