
We've been over this. Gödel's mu-recursive functions were a poor model of computation because it was completely unclear how to physically implement the arbitrary-function minimization operator, so people didn't see how to build a machine that calculates this way. Similarly, there was no clear way to mechanize lambda calculus.
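
To make that concrete, here is a rough sketch (my own illustration, nothing from Gödel or the article) of the minimization operator as unbounded search: mu(f) is the least n with f(n) = 0, and nothing in the definition bounds how long that search runs, which is what made it hard to picture as a machine.

    # Hypothetical sketch: the mu operator as unbounded search.
    # mu(f) returns the least n with f(n) == 0; if no such n exists,
    # the loop never terminates -- there is no bound to build into hardware.
    def mu(f):
        n = 0
        while f(n) != 0:
            n += 1
        return n

    # usage: the least n with n*n >= 50
    print(mu(lambda n: 0 if n * n >= 50 else 1))  # 8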

Turing Machines, on the other hand, were instantly obviously mechanizable. It was clear that one could build a physical machine to run any Turing program without human input. By proving that they could simulate the other systems, Turing showed that those other systems could be mechanizable as well.

I don't understand why Schmidhuber continues to ignore this crucial point.



Jürgen Schmidhuber has a track record of complaining about unfairness in the conventionally accepted view of the scientific record, especially in the English-speaking world. For instance, he has claimed for years that his own early research on AI has been unfairly overlooked or ignored by a "conspiracy" (his word, not mine).[a] At one point, the NY Times called him the "Rodney Dangerfield of AI research" because he doesn't get the respect he deserves.[b]

Ultimately, as always, credit will be assigned by future generations of scientists and historians long after everyone who is alive today has disappeared from the face of the earth. Many disagreements that seem important now will be forgotten. Many findings that seem important now will turn out not to be. Many names that are very prominent now will fade into obscurity. As always.

[a] https://people.idsia.ch//~juergen/deep-learning-conspiracy.h...

[b] https://www.nytimes.com/2016/11/27/technology/artificial-int...



>Jürgen Schmidhuber has a track record of complaining about unfairness in the conventionally accepted view of the scientific record, especially in the English-speaking world.

So, he has.

>Ultimately, as always, credit will be assigned by future generations of scientists and historians long after everyone who is alive today has disappeared from the face of the earth.

Well, credit for the field of CS is already being assigned now, and was yesterday and 20 years ago. It's not like this future credit would be the first such, or that it would be more accurate (or that it would not feed off the current popular views).


> But the iniquity of oblivion blindly scattereth her poppy, and deals with the memory of men without distinction to merit of perpetuity... Herostratus lives that burnt the Temple of Diana, he is almost lost that built it... Who knows whether the best of men be known? Or whether there be not more remarkable persons forgot, than any that stand remembered in the known account of time?

~ Sir Thomas Browne, Hydriotaphia (1658)


Likewise, people dispute that Ada Lovelace was the first programmer, because Babbage and Menabrea had previously created a few simple example programs.

But that downplays her accomplishments too much. She didn't write the "first program" but she was the first to understand what computers would be capable of doing (for example, that by assigning numbers to letters and symbols, computers could do more than simply perform numerical computations), and she was the first to invent foundational control flow structures such as loops and conditionals. Her program was much more rigorously defined and sophisticated than any previous examples.

>The longest program that Menabrea presented was 11 operations long and contained no loops or branches; Lovelace’s program contains 25 operations and a nested loop (and thus branching).

https://twobithistory.org/2018/08/18/ada-lovelace-note-g.htm...

https://writings.stephenwolfram.com/2015/12/untangling-the-t...

https://projectlovelace.net/static_prod/img/Diagram_for_the_...


I really enjoyed Stephen Wolfram's mini-bio of her.

https://writings.stephenwolfram.com/2015/12/untangling-the-t...

I very much recognized from that that she had the attitude and experience of a "programmer," so I would say she was the first programmer, in the modern sense.


Wow, thanks for the link. Really interesting story, fascinating to think about what could have been if she hadn't died so young.


"It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.

The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with. This it is calculated to effect primarily and chiefly of course, through its executive faculties; but it is likely to exert an indirect and reciprocal influence on science itself in another manner. For, in so distributing and combining the truths and the formulæ of analysis, that they may become most easily and rapidly amenable to the mechanical combinations of the engine, the relations and the nature of many subjects in that science are necessarily thrown into new lights, and more profoundly investigated. This is a decidedly indirect, and a somewhat speculative, consequence of such an invention. It is however pretty evident, on general principles, that in devising for mathematical truths a new form in which to record and throw themselves out for actual use, views are likely to be induced, which should again react on the more theoretical phase of the subject. There are in all extensions of human power, or additions to human knowledge, various collateral influences, besides the main and primary object attained." -- Ada Lovelace, 1842 http://www.fourmilab.ch/babbage/sketch.html


Ada Lovelace's father was Lord Byron?! TIL


Sadly for her she never knew him. The parents split up, he left England and she stayed with her mother. Byron died when she was just eight years old.


Look for Sydney Padua's comics for a lot of weird and strange facts about Lovelace and Babbage. (To be fair, Babbage was much weirder)


I was surprised to learn that as well.

I learned about it in Walter Isaacson's Innovators.


> because Babbage and Menabrea had previously created a few simple example programs.

That almost sounds to me like saying that the Wright brothers "made a few simple flights".

> first to invent foundational control flow structures such as loops

I wonder how sigma notation fits into this. Clearly the notion of expressing arbitrarily repeated operations using a fixed amount of information (which is what a loop is, essentially) was known at least to Euler.
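
For what it's worth, a small sketch (my own, just to make the analogy explicit): sigma notation and a for loop both pack an arbitrary number of repetitions into a fixed-size description.

    # Illustrative only: sum_{k=lo}^{hi} f(k) written as a loop.
    def sigma(f, lo, hi):
        total = 0
        for k in range(lo, hi + 1):   # "for k = lo .. hi"
            total += f(k)
        return total

    print(sigma(lambda k: k * k, 1, 100))  # sum of squares 1..100 = 338350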

Also, the fact that the machine enabled these things in the first place (unlike even some of the later machines such as Z3) suggests that its designer was either aware of this necessity to begin with, or at the very least in possession of preternatural prescience. In that case the use of these features in some programs but not in others would be not a matter of inventing them in the former programs but instead a matter of choosing to exploit existing hardware features, or declining to do so, depending on what program you're looking at.


> I wonder how sigma notation fits into this. Clearly the notion of expressing arbitrarily repeated operations using a fixed amount of information (which is what a loop is, essentially) was known at least to Euler.

You can even go further back. Algorithms with loops were known already to Babylonian mathematicians. So you don't need to resort to preternatural prescience.

The Z3 was not intended as a general computing device but as a practical help for engineers. Because of that you can't say it was missing something it didn't need to do its job. Whereas when Zuse designed Plankalkül, loops and conditional branches were naturally included in the design.


> That almost sounds to me like saying that the Wright brothers "made a few simple flights".

Richard Pearse gets written off in the same way to elevate the Wright brothers flying accomplishments.

Pearse was just pursuing powered flight as a hobby in rural New Zealand; he didn't bother informing the press and didn't even tell the government about his flights and engineering designs until WWII, 40 years later.

https://en.wikipedia.org/wiki/Richard_Pearse


Not sure what improvement Pearse made over, say, Ader.


Pearse achieved semi-controlled flight 300 cm above the ground versus Ader's uncontrolled ground-effect flight 20 cm above the ground.


I have no idea what is "a semi-controlled flight". Is it like "semi-riding a bike"?


Since the notes attributed to Lovelace were written as part of some scribe work she was doing for Babbage, what indication is there that the notes are her own original work, and not something that Babbage asked her to write? Don't get me wrong, she was clearly a very intelligent woman.


People that say Ada was the first programmer must think Babbage came up with the first general purpose computer then never wrote any instructions for it.

Maybe the first programmer who wasn't also a hardware engineer.


Defining the first person to do anything is almost futile. No one exists in a vacuum, and most firsts were standing on the shoulders of technological accomplishments far outside of their own field.

That said, I'm sure in the case of Ada Lovelace there is at least some element of my misogyny involved.


Awkward typo.


>She didn't write the "first program" but she was the first to understand what computers would be capable of doing

Or merely the first to express it? I'm pretty sure Babbage himself, as the inventor, understood well what computers would be capable of doing.


He was focused on using his machines to efficiently generate mathematical tables. It was Ada who realized the potential of the analytical engine as a universal computer. She even wrote that given a good numeric representation for sound, one could program the analytical engine to generate algorithmic music based on mathematical rules. Babbage himself wrote examples of programs for the engine, but they were all very simple examples of numerical calculations that would be applicable for generating mathematical tables.


Well, "first programmer" is an often inaccurate title, and it clearly belonged to Charles Babbage, at least for his own machine. Perhaps it would be better to describe Ada Lovelace as an early enthusiast / evangelist / adopter.


I'm not sure that inventors always understand the consequences of their inventions. Often, they are either focused on first-order capabilities and neglect the larger significance; or focused on visions of the future but unable to turn them into useful products in the short term.


Jeremy Campbell's Grammatical Man would like a word with you on Lovelace's actual documented contributions.


> I don't understand why Schmidhuber continues to ignore this crucial point.

From TFA:

> There is a seemingly minor difference whose significance emerged only later. Many of Gödel's instruction sequences were series of multiplications of number-coded storage contents by integers. Gödel did not care that the computational complexity of such multiplications tends to increase with storage size. Similarly, Church also ignored the context-dependent spatio-temporal complexity of the basic instructions in his algorithms. Turing and Post, however, adopted a traditional, reductionist, minimalist, binary view of computing. Their machine models permitted only very simple elementary binary instructions with constant complexity, like the early binary machine model of Leibniz (1679)[L79][LA14][HO66] and Zuse's 1936 patent application...

I presume by "seemingly minor" Schmidhuber implies "it turns out to be very important".
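
A rough illustration of the quoted point (my own toy example, not Schmidhuber's or Gödel's): if storage is coded as a single Gödel number, then even a trivial instruction like "increment register 1" is a multiplication whose cost grows with the size of the stored contents.

    # Toy sketch: registers (r0, r1, ...) coded as the number 2^r0 * 3^r1 * ...
    # "Increment r1" is then "multiply the code by 3".  The multiplier is fixed,
    # but the code keeps growing, so each step touches more and more bits.
    def encode(registers, primes=(2, 3, 5, 7)):
        code = 1
        for p, r in zip(primes, registers):
            code *= p ** r
        return code

    code = encode((0, 0))
    for _ in range(5000):        # repeatedly "increment r1"
        code *= 3
    print(code.bit_length())     # ~7925 bits; a Turing/Post step stays constant-size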


Not to dispute your point, but note the lambda-calculus-as-a-reasonable-machine papers from the last couple of years: it turns out (despite the seeming general understanding to the contrary in the past) that polynomial interpreters for some meanings of “lambda calculus” (including IIRC a weird very general one, call-by-need on open terms) are perfectly possible, meaning that many fundamental questions of computational complexity can be straightforwardly expressed in terms of the lambda-calculus machine as well. Linear-time interpretation (thus e.g. classes L and NL) seemed out of reach last I checked, though.

To echo my other comment here, it’s not that Church himself knew that. Even five years ago people did not know that. It’s not a question of priority. But I find it fascinating and reassuring nevertheless, as actually programming e.g. a self-interpreter for a Turing machine—or for general recursive functions, or for combinatory calculus—is an absolute slog.


Can you give some links to these recent papers? Sounds interesting. Thanks!


Didn’t have access to my archive at the time, so got some of the details wrong it seems (e.g. CbV not CbN, result for time is older than I remembered). Main thrust should remain valid, but be careful. In any case, here you go:

Dal Lago, Martini (2008). “The weak lambda calculus as a reasonable machine”. DOI:10.1016/j.tcs.2008.01.044, apparently not on arXiv.

Accattoli (2012). “A fresh look at the lambda-calculus”. DOI: 10.4230/LIPIcs.FSCD.2019.1, apparently not on arXiv.

Accattoli, Dal Lago (2014). “Beta reduction is invariant, indeed”. DOI: 10.1145/2603088.2603105, arXiv: 1405.3311 [cs.LO].

Accattoli, Dal Lago (2016). “(Leftmost-outermost) beta reduction is invariant, indeed”. DOI: 10.2168/LMCS-12(1:4)2016, arXiv: 1601.01233 [cs.PL].

Forster, Kunze, Roth (2020). “The weak call-by-value lambda-calculus is reasonable for both time and space”. DOI: 10.1145/3371095, arXiv: 1902.07515 [cs.CC].

Now that I’m looking through bibliographies, there are apparently other relevant intervening papers in the vicinity, even by some of the same authors, but these are what I’ve looked at personally.

Bonus: the paper

Hackett, Hutton (2019). “Call-by-need is clairvoyant call-by-value”. DOI: 10.1145/3341718, apparently not on arXiv.

is unrelated to questions of complexity but is just so absolutely lovely.


It is important for engineering, but as far as I understand it is not that important for math. E.g. Gödel solved the completeness and consistency problems, and Church was the first to solve the decidability problem.


The OP itself documents:

> Nevertheless, according to Wang,[WA74-96] it was Turing's work (1936) that convinced Gödel of the universality of both his own approach (1931-34) and Church's (1935).

Unless "according to Wang" is meant as "I don't know if I believe it", then apparently it's documented that Godel himself thought Turing's contribution was major and shed important light on the mathematical implications of godel's own work.

There's never any one person who invents anything; it's always built on work that came before.

Reading the OP, I got increasingly bored and tired... ok, what's your point? Yes, clearly Godel and Church especially did foundational work without which Turing's work would not have been possible -- and I don't think anyone denies it; anyone with any kind of computer science education is surely familiar with Godel and Church. It's not like they are languishing in obscurity: they are both very well known and respected! Godel especially is widely considered a real giant in his fields. I am confident that neither Godel nor Church is going to be forgotten for many generations.

But Turing made important contributions that took important steps necessary for the development of CS as a field. It's not a mystery or undeserved why Turing's name ends up remembered.

The OP's point is just that any singular "inventor" is always building on the work of many who have come before? OK, sure. Boring. So we should never promote anyone as making important foundational contributions? Well, people aren't gonna stop. Boring.


It's also the case that Church only knew that the notion of function defined by the lambda calculus was coherent once the Church-Rosser property was proven, and that was not published before Turing submitted his article.


> I don't understand why Schmidhuber continues to ignore this crucial point.

It is difficult to get a man to understand something when his salary depends upon his not understanding it.

But wait a minute, you might say, facts are facts.

And if everyone had the time and resources to discover and digest every fact, your understanding would be the end of it.

But everyone doesn't have time and resources. To compensate, we rely on others to curate facts for us. When we encounter an internally consistent subset of facts that suits our ideals and our interests, we adopt that point of view.

There are infinitely many subsets of curated facts that can be presented as internally consistent. That's why there are so many different points of view.

What bearing does this have on Turing's role in computer science, and his latter-day fame in a society which came to be defined by silicon logic?

The First Computers Were Human (and Mostly Women)

https://mjosefweber.medium.com/the-first-computers-were-huma...

Turing, in addition to stating an ontology of computing, dared to invite the question, what is the difference between a computer and a human?


>https://mjosefweber.medium.com/the-first-computers-were-huma...

I cringe when I read ill-researched essays like these because they imply a relationship between women and computing that genuinely did not exist back then.

For the vast, vast majority of women in the field of computing at this time, they were nothing more than glorified calculators. Yes, there were a few women scientists and mathematicians (by few, I mean literally handfuls). Yes, it was a male dominated field.

But the overwhelming majority of women working in this industry at this time did secretarial style busywork. It wasn't glorious. It was underappreciated. It sucked.

These types of essays are a genuine attempt to rewrite a history that did not exist. It is literary gaslighting the likes of which the article we are discussing right now is attempting to rectify.


> These types of essays are a genuine attempt to rewrite a history that did not exist.

I'm not sure what you are saying? Could you clarify?

What is "secretarial style busywork"?

Is this article also "ill-researched"?

https://www.smithsonianmag.com/science-nature/history-human-...

What would a better researched view of history say?

I'm not trying to troll you or make you write an essay here - just trying to glean the main points of what you believe so I can see your point of view.

Are you familiar with the view that the "Turing Test" was first conceived by Turing as a challenge to discern the gender of a human hidden behind a screen?

In my comment I was trying to point to Turing's interests in the nature of gender and humanity. I find that expansive and prescient in the sense that "computing" to us moderns is more about NLP than calculating maths, and yet, it's all the same thing under the hood.


Computers we use are nothing like a Turing machine, and if you want to credit one person for computer design, then it would be von Neumann. Media needs hero worship. It doesn't matter who the hero is, but they must have one. This applies to every field. Every scientist, entrepreneur, and artist is overblown. The older they are, the more overblown they are. The worst case is movie actors, who literally everyone knows are not "heroes" but just human puppets who move and behave as directors ask and writers dictate, yet in the eyes of people they are literally called "heroes" and "stars". The people moving these puppets are only mentioned in the credits, which no one reads.


Best I can tell, von Neumann stole the design that bears his name from John Mauchly and J. Presper Eckert. von Neumann doesn't seem to deserve as much credit as he is given. I'm not saying he does not deserve credit for his other work.


Both you and parent are correct: Von Neumann knew how to play the game to end up with "the credits people do read".


And Schönfinkel’s combinatory logic ten years earlier (we are talking priority here, right?) was even more awkward.

There’s also the point (mentioned e.g. by Wadler in his “Propositions as types” talk) that Gödel didn’t actually realize how pervasive universal computation was and actively pushed back against Turing proclaiming the equivalence. This is not to accuse him—he wasn’t particularly obstinate or unfair about it and, furthermore, came to understand the idea fairly quickly. But it’s not at all uncommon in science for the one who actually invented a thing to fail to realize what it was that they just invented, and somebody else comes years or in some unfortunate cases decades later and announces the founding of a new area of knowledge on that already-discovered site. Whom the later naming will associate with the discovery is a toss-up, but it’s generally fair and in good taste, I think, to mention both.


As someone said to me recently, Schmidhuber may have a few points about lack of credit regarding his work but him having invented everything to do with modern NNs is a wild claim and regardless he has cemented himself as a boor and continues to double down. This is just another example of that doubling down.


"For example, it was claimed that Turing founded computer science.[...] Turing's 1936 paper provided the "theoretical backbone" for all computers to come."

So your argument is, because it is unclear how to "physically implement the arbitrary-function minimization operator", Turing is the better "theoretical backbone" and has founded computer science?


The question in the air in 36-ish was something like, "OK, clearly we can mechanically compute things like the sum of two numbers or the prime factorization of a number. But are there other things that can be computed with a discrete and deterministic mechanism?" (At the time they called these "effective" functions.)

Church had piles of stuff that he and his students produced that were computable with the lambda calculus. Basically, all of the natural number functions that a person thinks are intuitively mechanically computable, those folks had showed how to lambda compute. With this evidence, he proposed to Godel (they were working together at Princeton at the time), who was considered the world's expert, taking "lambda-calculable" as a mathematically precise version of "mechanically computable." But Godel was notoriously careful, and he did not accept the thought as perfectly clear.

That is, they had a subset of the things that could be mechanically computed. But was it the entire set? Or was there something that some discrete and deterministic mechanism could be made to do that would lead to more than Church's set?

Imagine you are Dedekind and you are looking at the primitive recursive functions: (1) any such function is intuitively mechanically computable, and (2) you are able to work out how to define a pile of things like prime factorization of an integer using this system. You might well conjecture that this is it. But we know (and Godel and Church and Turing knew) that this is not it, that you need to add unbounded search of some kind (this is what minimization does) to get more things that are intuitively mechanically computable.
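
(A concrete witness, my addition rather than anything from the parent: the Ackermann function is total and clearly mechanically computable, yet provably not primitive recursive, since it grows faster than every primitive recursive function.)

    # Standard counterexample, sketched in modern notation.
    def ackermann(m, n):
        while True:
            if m == 0:
                return n + 1
            if n == 0:
                m, n = m - 1, 1
            else:
                m, n = m - 1, ackermann(m, n - 1)

    print(ackermann(2, 3))  # 9
    print(ackermann(3, 3))  # 61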

I agree that the minimization operator is not as easy to picture with gears and levers as some of the other operations. But the issue in 36 was that a person could worry that there was even more. Just as minimization is not as easy to picture and the need for it didn't hit Dedekind with great force, could there be something else out there that we have all missed?

That worry disappeared when Godel read Turing's masterful analysis. It convinced him that this is what a machine can do. He wrote, "That this really is the correct definition of mechanical computability was established beyond any doubt by Turing.'' Church felt the same way, writing that Turing machines have "the advantage of making the identification with effectiveness ... evident immediately.''


I'm still a little confused. It seems like Turing came up with something that works, and clearly fulfills precisely what Godel and Church and Turing were all looking for; but it also seems like it's a mathematically inelegant solution. Is it possible that in the future we'll find a way to show that Gödel's mu-recursive functions or Church's lambda calculus also precisely describe 'what a machine can do'? If so, it seems that from a mathematical standpoint, either of those would be a better foundation to start from. (I'll totally agree that from a teacher's perspective, the Turing machine is the better explanation.)


> Is it possible

Yes, the set of functions computable with mu recursion, or with lambda calculus, is the same as the set of functions computable with a Turing machine. (Turing in his original paper showed that the set is the same for his system and Church's, and the proofs for mu recursion and many other systems are well-known.)

> I'll totally agree that from a teacher's perspective, the Turing machine is the better explanation.

When I learned this, the instructor did not use Turing machines. They used mu recursion. That has the advantage that you don't first define a machine and then derive the set of functions, instead you go straight to the functions. But I agree that Sipser (which I understand to be the most popular text today) defines a computable function as one that is computed by Turing machine, and to my mind that has the advantage of honestly suggesting to students what they are about.


Thanks, I think I understand now. I thought there was a distinction between the mathematical idea of what computation is, and the engineering we've invented to implement that computation, and so I didn't really get the significance/literalness of what you were saying about mechanization.

It does seem weird to me though that we're letting our engineering limitations determine how we think about these things mathematically.


It's not as simple as engineering limitations determining how we think.

Turing machines do have mathematical advantages in some areas. See this sibling comment: https://news.ycombinator.com/item?id=28541800


I'm convinced that David Harland's Rekursiv[1] machine _is_ the manner by which lambda et al. might be implemented at the machine level. Unfortunately, Rekursiv seems to have died an ignominious death, with the last Rekursiv chip having fallen off the side of a steamboat (apocryphal; I remember having read this, but I'm unable to find the original citation).

The Rekursiv advantage is its ability to do recursion on the bare-metal. It's described as an object-oriented architecture. Its memory was a persistent store of objects, with each object having its size, type, and position known in memory.

[1] https://en.wikipedia.org/wiki/Rekursiv


I guess I'm confused as to why this is more than simply an interesting formalism. Complexity theory and models of computation are largely based around the Turing Machine model. Lambda calculus is an effective lens to design programming languages and prove equivalence between programs. We know by way of the Church-Turing Thesis that these two models are equivalent. The Turing Machine model is both better studied from a computation perspective and has much more practical realization; what's the point in creating something like this? It feels a bit like silly lambda calculus fetishism, but again maybe the value here is the actual computation formalism itself.


For me, personally, I really like the lambda calculus as a tool to organize and better my computational thinking.

I came into programming/computer science from a mathematics degree; I read some old treatises on recursion theory[1] and fell in love. I couldn't ever quite wrap my head around the Turing Machine formalism, but kept at it for a while. Finding Barendregt's paper [2] was a huge shock! I grasped it much quicker. So, yes, lambda calculus and the Turing Machine formalism are equivalent in explanatory power, but there are also reasons someone might prefer one to the other. So, yes, for me, the value _is_ the formalism.

As to why I think the Rekursiv would provide a good platform for implementing lambda calculus on the bare-metal, that's entirely due to Rekursiv's memory model advantage and the fact that it has a user-writable ISA. Why would someone choose to implement the lambda calculus on bare-metal? You call it "fetishism," I call it fun!

More generally, I just really like the idea of having a machine with a user-writable ISA.

[1] Theory of Recursive Functions and Effective Computability: https://openlibrary.org/books/OL2738948M/Theory_of_recursive...

[2] Introduction to Lambda Calculus: https://www.cse.chalmers.se/research/group/logic/TypesSS05/E...


FWIW Nothing wrong with having fun with computing, and implementing lambda calculus on bare-metal can be as fun as any other computational exploration, so good on ya!

Thanks for clearing up that it's the formalism you find interesting. Also, to offer a counterpoint, I'm also from a math background, but I was more of an analysis person (as much as one can be in mathematics where it's all related) than an algebra person, and when I did some FP research, it often felt like where all the algebraists go to play computer science. I feel like analysts are underrepresented in PLT (and overrepresented in complexity theory!) but this is already going off-topic, so cheers.


Nah mate, s'all good! It's great to hear your feedback; I am very much of the algebraist spirit myself (I barely passed my Rudin-based real analysis course). Our experiences definitely align. FP feels much more like my favorite parts of math.

Out of curiosity, can you identify any areas in PLT that could be made more analyst-friendly?

Intuitively, it feels that PLT is almost necessarily of the algebraist; to me, one of the big divides is the discreteness of algebra vs the continuity of analysis. Would it help if there was a PLT that exhibited a greater degree of continuousness? If so, what do you think that might look like?


Your comment made me think a lot! Thanks for that. If I had to "capture" what made analysis interesting for me, it's not just the notion of continuity, but the idea that we're analyzing the behavior of an object in the concrete instead of the abstract. That means taking an object and deriving all sorts of behaviors, instead of building up algebras from simple group/ring operations. To bring this back into PLT, it would probably mean the ability to place computational complexity bounds on functions/methods. Something like Mercury's Execution Modes https://www.mercurylang.org/information/doc-latest/mercury_r...


Very cool! I'd never seen this formalized, but it definitely has the color of some "hacks" I put together in Python (specifically the idea of "destructively updating" an item in an index, as opposed to appending an element to the end).

That's also a very interesting perspective on analysis. To get a better feel: is your joy of analysis in getting into the "internals" of an algebra, to directly derive properties of elements within the algebra as opposed to relying solely on global properties endemic to the construction of the algebra?


I always mention the Rekursiv in my talk about Smalltalk computers. Its problem was being a CISC in the era of RISC. The Manchester Mushroom from just a little later had initially the same problem but went through a major redesign when they saw the JIT compilers for Self on the Sun Sparc processor.


I'd never heard of the Mushroom! Thank you for making me aware of it! A quick DDG search returned no results; I'd appreciate any links you have on the topic!

I'd also love to see your talk! Do you think a RISC Rekursiv could be achieved? What value, if any, do you think such might have in our current world?


This is the current Mushroom page:

http://www.wolczko.com/mushroom/

The slides for my 2019 talk about Smalltalk computers (in LibreOffice and PDF formats):

http://www.merlintec.com/download/2019_slides_jecel_fast1v2....

http://www.merlintec.com/download/2019_slides_jecel_fast1v2....

and the video (1 hour and 13 minutes):

https://www.youtube.com/watch?v=tATpzsyC6OA

If you replace the Rekursiv's special microcode memory with a cache to allow microcode to live in main memory you will essentially have a RISC version of the machine. I have adopted this solution in several of my own designs.


Absolutely fascinating stuff! Thank you for the links, friend, and welcome to a highly-treasured spot on my hard drive; all of these materials are going directly to my ~/Research directory! Computer architectures are an area I'd love to enter when I'm more experienced, so these are an incredible source of inspiration.

I'm also deeply in love with the merlintec website. It's refreshing to see a website which has persisted since 1999(!), and without a lick of JavaScript it would seem!

Could I purchase a Merlin 6 or a Pegasus 2000?

Speaking of which, the way that the Pegasus 2000's eGUI documentation[1] is written is incredibly dear. The "world" and "heaven" analogies are great!


Thanks for this link, really interesting stuff. For a while there was a fashion in OS research for orthogonal persistence but this is much deeper and more elegant.


Absolutely! As mentioned, I'm very much a novice of computer architecture. It kinda blew my mind that Rekursiv was unique in its abilities. Also fascinating are LISP Machines, which seem to have received quite a bit of love from HN already.

If I may ask, from your perspective, what's more elegant about Rekursiv's design? Is it the philosophy or implementation? I'd also love any links to orthogonal persistence!


That's the central argument of the Church-Turing Thesis, isn't it? Church felt very strongly that the difference between his and his students' "elegance" and Turing's "practical" was a difference only in abstraction and that the two models were equivalent and translatable (you can write in one abstraction and convert it to the other).

That thesis continues to bear fruit, as the history of programming languages is almost entirely about bringing new and "better" abstractions to problems and then translating them to "dumber, more practical" machines. We have programming languages today modeled directly off the elegance of (though now sometimes still a few steps removed from being direct implementations of) the lambda calculus and mu-recursive functions, and the amazing thing is that they work great even given how "dumb" and "inelegant" our machines can be in practice.


> Is it possible that in the future we'll find a way to show that Gödel's mu-recursive functions or Church's lambda calculus also precisely describe 'what a machine can do'?

See also https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis


> we'll find a way to show that Gödel's mu-recursive functions or Church's lambda calculus also precisely describe 'what a machine can do'?

This is already proven to be the case. Mu-recursion (which IIRC is not Godel's general recursive functions, despite what Wikipedia says; Kleene was the originator of the mu operator. Godel's general recursive functions are defined in a separate, but yet again equivalent way that directly extends primitive recursion), Turing machines, and the lambda calculus are all proven to be exactly equivalent to each other. The fact that these three independent approaches to computability are all equivalent is why we have strong informal justification that the Church-Turing Thesis (a non-mathematical statement) holds.

Separately, there's a sentiment that I've seen come up several times on HN that somehow the lambda calculus, mu-recursion, or general recursion is more "mathematical" and Turing machines are less "mathematical."

I want to push back on that. The mathematical field of computability is based almost entirely off of Turing machines because there are many classes of mathematical and logical problems that are easy to state with Turing machines and extremely awkward/almost impossible to state with the lambda calculus and mu-recursion (this is consistent with my previous statement that the three are all equivalent in power because computability theory often deals with non-computable functions, in particular trying to specify exactly how non-computable something is). The notion of oracles, which then leads to a rich theory of things like Turing jumps and the arithmetical hierarchy, is trivial to state with Turing machines and very unwieldy to state in these other formalisms.

Likewise the lambda calculus and mu-recursion (but not general recursion) provide a very poor foundation to do complexity theory in CS. Unlike Turing machines, where it is fairly easy to discern what is a constant time operator, the story is much more complicated for the lambda calculus, where to the best of my knowledge, analyzing complexity in the formalism of the lambda calculus, instead of translating it to Turing machines, is still an open problem.
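
A toy illustration of that last point (my own sketch, not something the parent wrote): a single beta step for (lambda x. x x) M has to copy M, so the work done in one "step" scales with the size of the argument, whereas one Turing machine transition always rewrites a single cell.

    # Terms as tuples: ('var', name) | ('lam', name, body) | ('app', f, a).
    def subst(term, var, arg):
        kind = term[0]
        if kind == 'var':
            return arg if term[1] == var else term
        if kind == 'lam':
            _, v, body = term
            return term if v == var else ('lam', v, subst(body, var, arg))
        _, f, a = term
        return ('app', subst(f, var, arg), subst(a, var, arg))

    def size(term):
        return 1 + sum(size(t) for t in term if isinstance(t, tuple))

    big = ('var', 'y')
    for _ in range(10):                          # build a large argument term
        big = ('app', big, big)

    body = ('app', ('var', 'x'), ('var', 'x'))   # body of  lam x. x x
    result = subst(body, 'x', big)               # one beta step
    print(size(big), size(result))               # 2047 4095: output doubled in size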

There is indeed a mathematical elegance to Turing machines that makes it so that most of the mathematics of computability is studied with Turing machines rather than the lambda calculus.

The lambda calculus on the other hand is invaluable when studying programming language theory, but we should not mistake PLT to be representative of the wider field of mathematics or theoretical CS.

EDIT: I should perhaps make clear that if I put on my mathematical hat, mu-recursive functions seem like the most familiar formalism, because they align with a common way families of things are defined in mathematics (specify individual members and then generate the rest through some relationship). However, I would contend that for the majority of mathematicians outside of computability theory, the lambda calculus and Turing machines seem equally "strange."


In a nutshell, Church asserted effective computability by saying "look what you can do with the lambda calculus". Turing took the philosophical approach saying "this is what it means to compute". To Godel, Church's argument was incomplete. Turing provided the direct argument. Godel was convinced.


You seem like someone who might have a good book about this bit of history? Or perhaps a blog?


About the Theory of Computation in general. A draft: https://hefferon.net/computation/index.html :-)


Wonderful explanation, thank you.


I don't think it's important to quibble over who's overrated or underrated among these giants of math and CS who already get tons of recognition (I'm glad Schmidhuber brings many other historical names into the narrative).

However, yes, I do think that 'mechanization' or physical implementation is a crucial piece of Turing's contribution that is wrongly ignored in this article. And I think without mechanization, there is no CS as we understand it.


I can only repeat my comment from below:

"Likewise, Konrad Zuse never got a Turing award despite having created the world's first working programmable general computer 1935-41. [...] It was pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, [...]"


As far as I know, Konrad Zuse didn't prove that this strategy was a universal model of computation. In contrast, Turing proved that his universal machine could emulate any other machine, given the right program.

In my view, Turing's contribution is providing a plausible definition of computation along with a deep and comprehensive theoretical characterization of the properties of this model of computation. This is why Turing machines form the basis of theoretical computer science, and not other models such as lambda calculus. I think saying that Turing machines were adopted since they were merely more convenient is highly misleading.

I think this pattern repeats a lot: There are many cases where you can point to multiple people who invented similar ideas around the same time, but it is typically the person who provided the most deep and comprehensive treatment of the subject that ultimately gets most of the credit. This depth is not conveyed in pop science attributions such as "Turing invented computation", but this doesn't mean Turing doesn't deserve the credit.


I don't believe anyone has received a Turing award for creating a working computer.


https://en.wikipedia.org/wiki/Maurice_Wilkes

His Turing award citation: "Professor Wilkes is best known as the builder and designer of the EDSAC, the first computer with an internally stored program."


I think Wilkes got his award for his software contributions despite being well known for his hardware efforts.

The 2009 award to Chuck Thacker, on the other hand, was clearly based on his contributions to hardware. I have the impression that ACM had a change in policy around that time.

But officially I seem to be wrong:

https://amturing.acm.org/bysubject.cfm?cat=16 https://amturing.acm.org/bysubject.cfm?cat=15

In the "hardware" category Wilkes is listed as the only one, while Thacker, with Brooks, Cocke and Wilkes are in the "computer architecture" category.


Not OP, but I agree with them.

The word computer means multiple things. In one sense it is the abstraction of universal computation. Imagine a world where actual physical computers didn't progress to universal computation, but were stuck being purpose-built to the present day. The field of computer science would be utterly different because they couldn't actually compute anything with their science. They could just discuss computability in an abstract sense. It'd be like physics without the particle colliders or telescopes or lasers.

I think of the founders of computer science more like the founding fathers of America, rather than a single guy named Turing, but some are more memorable than others.


> Imagine a world where actual physical computers didn't progress to universal computation, but were stuck being purpose built

It’s not clear that you need a theory of computation to build a stored-program computer. You might not clearly understand the theoretical capability of such a machine, but that wouldn’t prevent you from doing many useful things with it.


The article writes about your point of general computation and purpose built computers:

"Likewise, Konrad Zuse never got a Turing award despite having created the world's first working programmable general computer 1935-41. [...] It was pointed out that none of the computers built during the 1940s were influenced in any way by Turing's 1936 theoretical paper, [...]"


> Turing Machines, on the other hand, were instantly obviously mechanizable. It was clear that one could build a physical machine to run any Turing program without human input.

Harold Abelson points out in one of his lectures [0] that computer science isn't about computers any more than biology is about microscopes.

From that perspective, it is clear that Turing found an existing discipline of computer science and made major contributions to it. But it doesn't really seem right to say that he invented or founded computer science.

[0] https://www.youtube.com/watch?v=-J_xL4IGhJA&list=PLE18841CAB...


Computer Science is about what computers can do. To decide the latter, you have to first decide what a computer is. Turing Machines were the first abstractions that intuitively captured what it means to compute something.


Sure, if you take "computer" to mean "something that computes". In that case it would include humans. There was a great deal of research into things that can be effectively computed that goes back even before the focus of this article. And of course "computer" used to refer to humans who computed before the invention of mechanical computers.

But it's certainly not the study of what mechanical computers can do. Among other things, mechanical computers all have bounded resources unlike Turing machines.


A whole bunch of Computer Science is only relevant for actual computers not Turing Machines. When I was (a student) at University they didn't teach Wait-Free versus Lock-Free concurrent algorithms, but that's an important 21st century topic because actual computers today are capable of many simultaneous operations, and it's totally possible to write a program that isn't even Lock-free and may literally make no progress despite spinning its wheels.


Yes that is true. There are applied sub-disciplines for all of the science. Typically you would call that something like "applied computer science" or "computer engineering", but CS is new enough that it hasn't split like that yet.

Nobody is saying that's not important. But the field as a whole began before mechanical computers were invented or practical, and there are several subfields that are basically indistinguishable from mathematics.


I mean, if you want to relegate CS to only mean theoretical CS, then you’d be in a small population: most theorists don’t want that, even. One of the cool things about CS is the rich interplay between applications and theory, and separating the two would only harm both.


Nobody is trying to relegate CS to a theoretical discipline. But theoretical CS exists, so it doesn't make sense to claim CS is the study of mechanical computers. Since it started as a theoretical discipline, it makes even less sense to say that the field was founded when we first figured out how to practically mechanize the ideas in CS.


I mean, my university still has separate Chemistry and Astronomy departments, but at yours following your preferred nomenclature, how do they distinguish among all the resulting departments called stuff like "Applied Philosophy", "Applied Philosophy" and "Applied Philosophy" ? Or is it that for some reason you think only Computer Science should be singled out in this way?


I sense that you're trying to make a joke.

In any university, Chemistry is not the study of beakers nor is astronomy the study of telescopes.


But Chemistry cares about actual chemicals and Astronomy cares about our actual universe. Likewise, Computer Science is overwhelmingly concerned with actual computation. You aren't identifying a distinction here.

Both Software Engineering and Computer Engineering exist, as sub-disciplines, but it doesn't make sense to argue that somehow studying Graphene (a chemical which exists) is Chemistry while studying non-blocking algorithms is only Applied Computer Science somehow just because such algorithms could be used on an actual computer.


Mechanical computers approximate Turing Machines, just like NP approximates RE: instead of asking “does this program halt?”, we ask “does this program halt in N steps?”.

If N is sufficiently (polynomially) large, the two are approximately equal.
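
A sketch of that bounded-halting idea (my own example, nothing from the parent): "does it halt?" is undecidable, but "does it halt within n steps?" is decidable by simply running the computation for at most n steps.

    # step(state) -> (new_state, done); run it for at most n steps.
    def halts_within(step, state, n):
        for _ in range(n):
            state, done = step(state)
            if done:
                return True
        return False

    # usage: does the Collatz iteration from 27 reach 1 within 1000 steps?
    def collatz_step(x):
        x = x // 2 if x % 2 == 0 else 3 * x + 1
        return x, x == 1

    print(halts_within(collatz_step, 27, 1000))  # True -- it reaches 1 within the bound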


> Mechanical computers approximate Turing Machines

Yup, so do humans with paper and pencil.


that's not the case, according to the article. If anything the article implies the opposite. Turing machines were a re-abstraction of Godel's computational model that provided a path to mechanical realization.

Also, if you ever work with the Turing machine (a finite-state control hooked up to an infinite recording tape), it is not at all "intuitive" that this construction comprehensively captures the world of computation.
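
For anyone who hasn't seen it spelled out, here is a minimal sketch (mine, purely illustrative) of that construction: a finite control table driving a head over an unbounded tape. This one just increments a binary number.

    # Hypothetical toy: finite control + unbounded (sparse) tape.
    def run_tm(table, tape, state='start', pos=0):
        cells = dict(enumerate(tape))            # '_' means blank
        while state != 'halt':
            sym = cells.get(pos, '_')
            write, move, state = table[(state, sym)]
            cells[pos] = write
            pos += 1 if move == 'R' else -1
        return ''.join(cells[i] for i in sorted(cells))

    # binary increment, with the head starting on the rightmost bit
    increment = {
        ('start', '1'): ('0', 'L', 'start'),     # flip 1 -> 0 and carry left
        ('start', '0'): ('1', 'L', 'halt'),
        ('start', '_'): ('1', 'L', 'halt'),      # ran off the left end
    }
    print(run_tm(increment, '1011', pos=3))      # '1100'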


Godel's computational model does not intuitively capture what it means to compute. Godel himself was unconvinced that his model was universal, and it took Turing's paper to convince him that his model, lambda calculus, and Turing machines were equivalent and universal.


> Similarly, there's no clear way how to mechanize lambda calculus.

Is Lisp such a mechanization?


Any computer (with enough resources) can simulate any other computer. So if you get Lisp by running such a simulation (an interpreter) on your PC, that is not the same thing as a direct implementation of lambda calculus.

The famous Lisp Machines of the 1980s were reasonably traditional Von Neumann computers with nice features like tagged memory and optimized stacks.

Much closer to mechanizing lambda calculus, but still a bit Von Neumannish, is the SECD virtual machine for Lisp and friends:

https://en.wikipedia.org/wiki/SECD_machine


I'd say no, not on its own. It's not clear how to implement a Lisp program in hardware. You need some steps which today involve a von Neumann architecture and an imperative assembly language that implement Lisp constructs...



