AFAIK Carolina only uses the D-Lev pitch circle for coming in on pitch, not for active playing. That's pretty typical for players who have learned on an analog Theremin, where they have necessarily been guided only by fingering techniques and their ears. Many Thereminists plug a guitar tuner or similar into their Theremin for this, but those sorts of tuners are rather sluggish. When you have a highly responsive tuner, then playing turns into an almost "paint by numbers" experience, and you are much more aware of the key and other song structure.
Do you know if there are any affordable "theremin" like MIDI control devices that enable free assignment of parameters to the antennas? For example, I could have both antennas control pitch, independently of each other, and amplitude would be set to a fixed value via a knob or on a device that the "theremin" sends its output to.
The D-Lev has a fairly extensive MIDI implementation, and you can control any CC with the volume hand (7 or 14 bits), so perhaps something like this would be possible if the synth it's driving is flexible enough.
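For reference, something like this is the generic MIDI convention for 14-bit controllers (not anything D-Lev-specific): controller N carries the upper 7 bits and controller N+32 the lower 7. A minimal TypeScript sketch:

  // Sketch only: splitting a 14-bit value into the conventional MSB/LSB CC pair.
  function cc14(channel: number, controller: number, value14: number): number[][] {
    const status = 0xB0 | (channel & 0x0F);   // Control Change status byte
    const msb = (value14 >> 7) & 0x7F;        // coarse 7 bits
    const lsb = value14 & 0x7F;               // fine 7 bits
    return [
      [status, controller, msb],              // CC N (coarse)
      [status, controller + 32, lsb],         // CC N+32 (fine)
    ];
  }

  // e.g. mapping the volume hand to CC 1 at full 14-bit resolution:
  console.log(cc14(0, 1, 12345)); // [[176, 1, 96], [176, 33, 57]]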
There are 50 or so kits spread out all over the world, some in the hands of the world's best Thereminists, which has been quite gratifying. But the project has been in a bit of a hiatus while I do more R&D, and the current tariff situation isn't exactly filling me with enthusiasm. Enclosures and antennas have been a burden for some; a wine box build is probably your best bet. I don't mind supplying hand-wound coils and any guidance you may need. My contact info is on the support page.
A stack processor will always be less efficient than a two or three operand register-based processor. This is because all of the registers can be used directly without any stack manipulations to access them, and the two or three operand operations usually include a move.
If you examine any stack language code, you should consider any stack manipulation to be a NOP type of inefficiency. And when a virtual stack machine is implemented on non-stack hardware, these inefficiencies are compounded.
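To make the "NOP type of inefficiency" concrete, here's a toy sketch (mine, in TypeScript; the word set and counts are illustrative, not any particular Forth) of (a + b) * (a - b) on a one-stack VM versus a 3-operand register machine:

  type Word = "over" | ">r" | "r>" | "+" | "-" | "*";

  function run(words: Word[], data: number[]): number[] {
    const d = [...data];        // data stack, top at the end
    const r: number[] = [];     // return stack, used here only for juggling
    for (const w of words) {
      switch (w) {
        case "over": d.push(d[d.length - 2]); break;   // copy second item to top
        case ">r":   r.push(d.pop()!); break;          // stash top on return stack
        case "r>":   d.push(r.pop()!); break;          // bring it back
        case "+": { const b = d.pop()!, a = d.pop()!; d.push(a + b); break; }
        case "-": { const b = d.pop()!, a = d.pop()!; d.push(a - b); break; }
        case "*": { const b = d.pop()!, a = d.pop()!; d.push(a * b); break; }
      }
    }
    return d;
  }

  // ( a b -- [a+b]*[a-b] ): 7 dispatched words, 4 of them pure data shuffling.
  const prog: Word[] = ["over", "over", "+", ">r", "-", "r>", "*"];
  console.log(run(prog, [5, 3])); // [16]  i.e. (5+3)*(5-3)

  // A 3-operand register machine needs 3 instructions for the same work:
  //   add r3, r1, r2 ; sub r4, r1, r2 ; mul r5, r3, r4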
I think you have to take a larger view of what is 'efficient'. Historically, instruction fetch bandwidth was scarce and caches were expensive, often tiny or non-existent - stack machines with one-byte opcodes (look at the Burroughs large systems) were efficient in context.
We've switched to RISC these days (despite the above, I'm a big fan and have built several) largely because there came a point where we could push everything onto a chip. RISC started to make sense at about the time cache went on-chip (or was SRAM closely coupled to a single die). And for the record, I think x86 has survived because its ISA was the most RISCy of its original stable-mates (68k, 32k, z8k, etc.): x86 instructions make at most one memory access (with one exception) and use simple operands.
Two- and (especially) three-operand processors spend significantly more space encoding register indexes, though. It's worth it to avoid Forth-style pop swap rot tuck u* spaghetti, but register machines just push the NOP-type inefficiency somewhere else. E.g., obviously most 3-ops are the last use of at least one source operand, so there's usually no point encoding separate src1 and dst fields, but also many instructions immediately reuse (often as its last and only use) the destination register of the previous instruction as a source.
I'd kinda like to see a machine with an intermediate, one-operand style of instructions. E.g.:
add tos stN # *sp += sp[N]
add stN pop # sp[N] += *sp++
ld [stN] # *--sp = mem[sp[N]]
I think Henry Baker argued otherwise, saying that at the circuit level a stack processor can be designed to operate faster (shorter paths, fewer clock cycles) than register-based ones.
The problem with trivial ops like stack manipulations is that they don't really do anything useful, but they can take as long as a multiply in the pipeline.
Stack machines made more sense when memory was limited (small opcodes) and there were no hardware multiplies, but these days they make no sense.
I don't understand all the vague nostalgia for something that never could have panned out outside of the creaky old Apollo flight computer or something.
Fair point. That said, it's not nostalgia: Baker's article was about linear logic and how Forth is a natural fit for it. Cue Rust's borrowing and you can see why it may be of interest (if Baker was right, of course).
I would argue that a language targeting bare metal type applications should at least be minimally aware of that underlying hardware. A single stack "virtual machine" type language is generally a terrible fit for the 2 and 3 operand register-based processors which dominate the landscape.
Every interview of Chuck Moore that I've read has contained zero push-back for his rather wild claims. It's entirely possible for an industry to go down the wrong path for a while, but at some point, if Forth and stack processing were the giant killers they were cracked up to be, you would see them enter and dominate at least some portion of the mainstream. You can't say they haven't been given enough time.
It seems there are many "Forth curious" programmers out there, but they aren't being given the full picture with the various puff pieces and vanity projects floating around that never really go anywhere. It's almost a culture of victimhood.
I’d say calling a stack-based VM terrible is overstating the case a bit. In [1] the numbers seem to bear out that converting from stack-based to register-based yields an average, adjusted performance increase of approx. 25%. Also, the transition results in an increase of code size of approx. 45%. That speed-up is a non-trivial amount, but so is the code size increase, which is a valid area of concern for embedded/resource-constrained engineering. Also, while I tend to STRONGLY agree about awareness of hardware at the language level (although I don't think this should be limited to just embedded environments), the methods in [2] are automated and simple ways the compiler can take stack-based code and create optimized register-based instructions.
As an aside, there is probably a different argument to be had about whether a stack-based VM as the mental model of a language is beneficial, but as I said, that is a very different argument from discussing the technical ability to translate a stack-based VM to a register-based one.
I briefly skimmed [1] and their more sophisticated translation to a register VM yielded a 25% increase in code size, and not the 45% you stated? Regardless, VM to VM really isn't my point.
I still don't get the positives of why the programmer should be presented with a stack machine SW model when there is no stack machine type stack to be found in the HW. Programmers with no stack machine / language experience probably think (as I did at one point): "the stack on my HP calculator works great, why not base a language on that?" But it scales poorly, it gets really hairy if the stack is the only place to store things, and it's a major headache keeping the stack from going out of sync (hence Forth's clear I/O comments documenting each word's stack use).
You are correct about the 25%. That was a mistake (for anyone else looking, the 45% was a decrease in the number of executed instructions going from stack to register).
Correction aside, I, at some level, agree with you. The question of the appropriateness/benefits of a SM model is different from considerations of the effectiveness of the code. In terms of the Forth SM model and its explicitly imperative nature, I am familiar with the various complaints/arguments/critiques, for varying reasons. I certainly acknowledge there is at least some burden placed on the programmer due to the stack mechanics. I happen to be particularly attracted to the approach and enjoy programming that way, but that is just a preference of mine. However, I also tend to look at Forth as an imperative language with which to program in what Backus describes as ‘function level programming.’ I find the benefits listed by Jon Purdy, author of the Kitten language, in his talk about concatenative code compelling enough to make exploration and refinement of this idea space valuable.
All that said, I am perfectly willing to consider that, as presented in Forth and presumably in the proposed language, SM mechanics and models are a hurdle for users of the language.
In almost every meaningful interview of Moore, he explains the concept of virtual stacks. In his most recent interview, he literally explained how he designs all of his chips to have 8 physical stacks, while in the past he used (I believe) 16 stacks. You're misrepresenting Moore's claims in a way that can't be taken as anything but uninformed.
I believe there has been a wide variety of successful Forth projects including NASA probes.
Forth makes more sense when you can be like Chuck and design your own hardware. At that point Forth is more of a minimalist stack-based philosophy that consists of the minimal hardware + software design to accomplish something.
Is his way better? It likely has somewhere near the bare minimum of code possible and is very efficient system-wise. If you wanted a minimal solution and could devote years to the project, then this is nice.
Does it make the best usage of developer time? Obviously not in many cases. In today's business world, you just slap some components together, do minimal acceptable testing, push it out and go to the next thing. Cost is important to the customer, so as long as somebody is willing to do that kind of work, everyone has to. Of course the downside is that we've accreted all this tower of abstractions and complexity going from OS to JVM to libraries and source. We already have tons of COBOL that can barely be maintained. Next will be the large Java and Python codebases.
The Forth success stories tend to be really, really ancient, and therefore almost irrelevant. Much like Chuck's arguments for the merits of stack languages / machines. Processor pipelines have to be at least deep enough to do a wide multiplication or you're basically looking at a toy.
I actually have designed my own FPGA soft-core barrel processor; it's a special blend of register and stack machine. The blend comes from placing stacks under the registers themselves. I believe this allows a low-register-count, 2-operand architecture to be more efficient than it otherwise would be, which minimizes opcode size and sidesteps most of the craziness you get when trying to shoehorn every process into a single stack environment.
But it has clear downsides as well, the main one being that the stacks can easily become corrupted by any process using them - this is true of any stack machine, but you strangely never hear it come up in conversations with Forth types. Stack processors can eliminate much traditional processor state, but the stacks themselves contain state, which is often overlooked.
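If it helps, here's a toy software model of the idea (a TypeScript sketch just to illustrate, not the actual HDL): each "register" is the top of its own small LIFO, and an instruction can optionally pop its sources and/or push its destination.

  class StackRegFile {
    private stacks: number[][];
    constructor(n: number) { this.stacks = Array.from({ length: n }, () => [0]); }

    read(r: number, pop = false): number {
      const s = this.stacks[r];
      return pop ? s.pop()! : s[s.length - 1];     // register value = top of its LIFO
    }
    write(r: number, v: number, push = false): void {
      const s = this.stacks[r];
      if (push) s.push(v); else s[s.length - 1] = v;
    }
  }

  // A 2-operand add that consumes one source but preserves the other's old value:
  const rf = new StackRegFile(8);
  rf.write(1, 5, true);                             // push 5 onto r1
  rf.write(2, 3, true);                             // push 3 onto r2
  rf.write(1, rf.read(1) + rf.read(2, true), true); // r1 <- r1 + r2, popping r2, pushing r1
  console.log(rf.read(1));                          // 8, with r1's old 5 still underneath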
Multi-core F18A technology, each of which is a simple 18 bit processor. IMO, anything less than 32 bits (with internal 33 x 33 = 65 bit multiply) falls into the primitive category abyss.
Moore is an incredible salesman, I'll give him that.
Moore is an awful salesman but a skilled technologist, you have it reversed. He only recently got buyers for his recent processors, and he doesn't handle sales in any of his business endeavors.
Also, the chips he made before GA were 32-bit. He deemed it unnecessary, and the GA chips run miles around them.
I meant that he's a genius at the whole stack machine / language shaman thing.
32 bits are unnecessary?!? I suppose an 18-bit machine would run a lot "faster" than a 32-bit machine given certain data sets and loads, but I wouldn't want to do any audio DSP with it.
Actually, audio DSP might be a bit better on them, too. They have the best power/instruction ratio on the entire planet, and given each chip has 144 entire computers on it, audio DSP should be no problem for them.
COMPLETE SYSTEMS: We refer to our chips as Multi-Computer Systems because they are, in fact, complete systems. Supply one of our chips with power and a reset signal, and it is up and running. All of our chips can load their software at high speed using a single wire that can be daisy chained for multiple chips; if desired, most can be bootstrapped by a simple SPI flash memory. Application software can be manufactured into a custom chip for a modest cost to further simplify overall system design. External memory is not required to run application software, but our larger chips have sufficient I/O to directly control external memory devices if desired.
Contrast this with a Multi-Core CPU, which is not a computing system until other devices such as crystals, memory controllers, memories, and bus controllers have been added. All of these things consume energy, occupy space, cost money, add complexity, and create bottlenecks. Most multi-core CPUs are designed to speed up conventional operating systems, which typically have hundreds or thousands of concurrent processes, by letting a handful of processes execute in parallel as opposed to only one. They are not, typically, designed for significantly parallel processing, and they are even less well suited for simple applications than are their less expensive single-core progenitors.
It's meant for hyper-parallel applications, unlike Pentium cores.
If you actually look at the way Forth works you'll see that every stack manipulation wastes code space and real-time. Since there is only one data stack there are a lot of stack manipulations going on. Forth programmers are aware of this and do their best to minimize them, which tends to make their incredibly cryptic code even more cryptic.
If the definition of a low level language is one that bedevils the programmer with minutiae, then Forth is the lowest of the low. I don't understand the fascination others have for it, and don't understand how anyone can like it after actually programming with it. It's horrible.
It teaches you to set things up so that the code doesn't have to do stack manipulations and other minutiae in the typical case. Most other languages seem to encourage modules with general APIs and hard boundaries so that the caller has to unpack/repack/rearrange the data as it enters and leaves. Forth very deliberately encourages developing a holistic system, and it discourages wholesale code reuse from other projects and systems, which gives you the power and flexibility to refactor relentlessly, until only the essence of the computational solution remains.
Forth is definitely a difficult language to work with, particularly in a professional environment where managing turnover is massively important. When I dive in to some Forth code that I've written, to make even the smallest of changes, my brain has to be fully engaged, and that's a non-starter in most environments (Chuck probably thinks this is a good thing; why are we making changes to code we don't understand?). But I am still an avid proponent of learning and applying the principles of Forth, because of the results that it makes possible. It is quite eye-opening to see directly how a system can become 10x as powerful, with 1/10th of the code, if you are willing to do the work and embrace the "minimalist" (I would call it "essentialist") mindset.
One of the Elbrus supercomputers was a stack machine that translated stack operations into out-of-order register operations, quite successfully. There was also a variant of Ada called El-76.
(look up Pentkovsky for more interesting stories)
This means that you do not need to sacrifice speed for compactness.
Also, zero-operand ISAs (stack machines are zero-operand) have what can be called a normalizing property: given a stack layout for inputs and outputs, there is only one optimal way to achieve it. E.g., the "( b a -- x) swap -" sequence won't differ in any place where you have to compute b-a (in the typical three-operand RISC case there are 32^3 register combinations). This means you can use something like the Sequitur algorithm [1] to make code more compact than six bits per opcode.
This is, actually, what Forth programmers often do manually - and in an untyped language, which will not complain about stack layout violations after refactoring.
But this does not mean that you cannot use, say, dependent types for Forth programs for better programming safety. And this does not mean that you can't benefit from single (or double) stacks or their alternatives. For example, the Applicative class from Haskell's Prelude is good at expressing concatenative programs, just like the ones in Forth, I think.
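To make the normalizing property concrete, a toy illustration (TypeScript, made-up opcode numbers): the same zero-operand pair appears at every use site, and those repeats are exactly what a digram-based compressor like Sequitur factors out.

  const SWAP = 1, SUB = 2, LIT = 3;

  // Two unrelated spots in a program that both need b-a emit the exact
  // same "swap -" pair -- there are no register numbers to make them differ.
  const program = [LIT, 10, LIT, 3, SWAP, SUB,   // site A
                   LIT, 2,  LIT, 9, SWAP, SUB];  // site B

  // Count repeated adjacent pairs (the digrams Sequitur would turn into rules):
  const digrams = new Map<string, number>();
  for (let i = 0; i < program.length - 1; i++) {
    const key = `${program[i]},${program[i + 1]}`;
    digrams.set(key, (digrams.get(key) ?? 0) + 1);
  }
  console.log(digrams.get(`${SWAP},${SUB}`)); // 2 -- identical, compressible
  // A three-operand register ISA would encode different register fields at
  // each site (e.g. sub r3,r7,r3 vs. sub r11,r2,r11), so nothing repeats.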
I have a hard time resisting the temptation of ROT >R myself at times, and it's true that I never pass arguments to the wrong function in C, while in Forth I do. Even assembly is less bug-prone in my experience. But I think there's probably a there there that I don't fully understand, and I want to.
Sometimes I wonder if Forth would be better off with no stack manipulation words. If you really need to swap, after all, you can X ! Y ! X @ Y @. Chuck left SWAP out of the x18 in the end.
In a stack oriented language the stack manipulations are the logic. You could make a more complicated compiler to optimize them or use local variables but most people/projects don't bother.
>Forth programmers are aware of this and do their best to minimize them, ...
This has never been true in my experience. The effort goes into ensuring that the stack manipulations are correct.
No, but Forth provided one of the nicest, most immediate, and interactive environments to explore a problem. Instead of multiple files in a text editor, you build and test your app live and then dump it all to a text file. I have rarely had that experience with any other language except Smalltalk. I understand some Lisp environments act that way, but Forth was available places Lisp and Smalltalk were not.
>you build and test your app live and then dump it all to a text file
The Tcl shell is really great for doing this. Since everything can be represented as a string-type, you're able to introspect your defined procedures and dump them to disk.
I don't think I've used any IDE (other than Smalltalk) that was vaguely in the class of using Forth for interactivity and building of programs from the bottom up.
What's up with mocking everyone you reply to on this thread?
THE central problem with pure stack machines and stack languages:
The programmer knows in their heart that moves, swaps, dups, drops, etc. - any stack manipulation that doesn't involve a functional change to the data itself - are an inefficiency to be minimized, but this effort isn't in any way related to the problem at hand (writing a program to do something), so it's unwelcome mental overhead.
I like puzzles as much as the next person, but not so much when they seriously impede the solving of a bigger more serious puzzle, nor when the sub puzzle solving is an exercise in the minimization of something bad rather than the elimination of it.
Traditional languages push and drop data from the stack every time a function or procedure is called; they just give it another name: parameter passing. By making it explicit, Forth at least tries to reduce the need to copy things from the stack to local variables.
I don't think the stack in traditional languages is the same as the stacks in stack machines and languages. It's a lump of memory that gets allocated to a thread for stuff, and the allocation is indeed done in LIFO fashion, but I believe access to individual memory locations in the allocation is random. The name is the same, LIFO and all, but the granularity of the mechanism makes it quite different.
It is true that traditional languages lump local variables and return information in the same stack. But the difference is only in the way this is handled by the program. In traditional languages the programmer has no idea how parameters are passed in the stack and the compiler does everything. In Forth this is made explicit, but on the other hand there are no formal parameters to worry about (notice that Forth can use local variables if you want, it is just not the idiomatic way).
If you're programming in Forth and you find yourself doing lots of stack manipulation, you should at least consider using local variables. Once you accept that there will be times when variables make more sense, you'll be surprised at how infrequently you actually need them.
Edit: Be aware, however, that variables dramatically reduce composability (in any language, actually; most just aren't very composable to begin with). By using local variables you essentially turn a procedure into a monolithic block.
Both. And under the hood, Forth implements a virtual stack machine on top of a non-stack machine, which is inefficient.
I guess I'm just trying to counter all the mythos and happy talk surrounding Forth and stack processing in general. In the end Forth is a highly (too?) simplistic language, a product of its time, and nothing all that special or powerful. It does a lot of stuff poorly and the code tends to be write-only. The syntax is so loose that the entire dictionary has to be searched to know you're dealing with a number. It's no mystery that we aren't all coding in it now.
I agree. Not trying to be racist, but I find that British authors are quite often unnecessarily verbose, so I'm somewhat gun shy when picking up their books.
One notable exception to this observation is Douglas Self, who writes thick, incredibly useful, highly detailed, and remarkably entertaining books on audio circuitry design. They read almost like good novels.
>One notable exception to this observation is Douglas Self, who writes thick, incredibly useful, highly detailed, and remarkably entertaining books on audio circuitry design. They read almost like good novels.
Maybe because sometimes you really need those highly detailed, long, and intricate arguments to fully understand all the nuances of the subject.
I'm probably showing my profound ignorance, but what has lambda calculus done for me lately? It seems to be an abstract formalism of some sort, but I've yet to see any real use of it, the examples are baby step type stuff with obfuscated syntax.
CS seems overburdened with jargon and trendy crap (IMO) so is this just the latest thing coming down the pike that everyone needs to give lip service to in order to seem smart to their peers?
>so is this just the latest thing coming down the pike that everyone needs to give lip service to in order to seem smart to their peers?
Lambda calculus was created by Alonzo Church in the 1930s. Alongside the work of Alan Turing, it forms a large part of the mathematical basis for modern computer science[0][1], and the implementation of many, if not most, modern programming languages.
> 80 year old concept that has spanned dozens of programming languages.
I don't even disagree with you: the "throwback to the roots" thing is not very useful most of the time, except as intellectual curiosity.
Lambda calculus is still extremely useful for programming language theory, though, because it's the simplest thing you can imagine. If you want to prototype your fancy new type system or weird new execution model, you always design it on the lambda calculus first.
OMG, that's insane (in a bad way). Stacks exploding, hours of optimization, all for a simple factorial (toy problem). Exciting I suppose for a few souls in academia who spend their lives writing papers about theoretical stuff (not that there's anything wrong with that).
Oh dear, I have failed. The whole point of that paper was supposed to be that all this theory stuff actually has practical applications. The same techniques that worked on that toy problem can be used on real problems and produce similar results.
I did some follow-up work after that paper to implement binary arithmetic in the lambda calculus, and using that, of course, you can compute "real" factorials. Here it is computing the factorial of 100 in under a second:
? (time (pbn (bn_fact (bn 100))))
(PBN (BN_FACT (BN 100)))
took 942,269 microseconds (0.942269 seconds) to run.
64,791 microseconds (0.064791 seconds, 6.88%) of which was spent in GC.
During that period, and with 4 available CPU cores,
878,343 microseconds (0.878343 seconds) were spent in user mode
111,475 microseconds (0.111475 seconds) were spent in system mode
759,285,088 bytes of memory allocated.
504 minor page faults, 0 major page faults, 0 swaps.
93326215443944152681699238856266700490715968264381621468592963895217599993229915608941463976156518286253697920827223758251185210916864000000000000000000000000
This is the expanded code:
((λ f ((λ g (g g)) (λ (h x) ((f (h h)) x))))
(λ f
(λ n
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) n)
(λ dummy ((λ (l r s) (s l r)) (λ (_t e) _t) (λ s (λ (_t e) _t))))
(λ dummy
(((λ f ((λ g (g g)) (λ (h x) ((f (h h)) x))))
(λ f
(λ (n1 n2 r)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) n1) (λ dummy r)
(λ dummy
(f
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ s (λ (_t e) _t))) (λ dummy ((λ p (p (λ (l r) r))) l)))
(λ x x))))
n1)
((λ (l r s) (s l r)) (λ (_t e) e) n2)
(((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ (_t e) e)) (λ dummy ((λ p (p (λ (l r) l))) l)))
(λ x x))))
n1)
(((λ f ((λ g (g g)) (λ (h x) ((f (h h)) x))))
(λ f
(λ (n1 n2 c)
(λ nil
(((λ (c _t e) (c _t e))
((λ (p q) (p q (λ (_t e) e))) ((λ p (p (λ (l r) (λ (_t e) e)))) n1)
((λ p (p (λ (l r) (λ (_t e) e)))) n2))
(λ dummy
(c ((λ (l r s) (s l r)) (λ (_t e) _t) (λ s (λ (_t e) _t)))
(λ s (λ (_t e) _t))))
(λ dummy
((λ (l r s) (s l r))
((λ (b1 b2 c)
(b1 (b2 c ((λ c (c (λ (_t e) e) (λ (_t e) _t))) c))
(b2 ((λ c (c (λ (_t e) e) (λ (_t e) _t))) c) c)))
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ (_t e) e)) (λ dummy ((λ p (p (λ (l r) l))) l)))
(λ x x))))
n1)
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ (_t e) e)) (λ dummy ((λ p (p (λ (l r) l))) l)))
(λ x x))))
n2)
c)
(f
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ s (λ (_t e) _t)))
(λ dummy ((λ p (p (λ (l r) r))) l)))
(λ x x))))
n1)
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ s (λ (_t e) _t)))
(λ dummy ((λ p (p (λ (l r) r))) l)))
(λ x x))))
n2)
((λ (b1 b2 c) (b1 (b2 (λ (_t e) _t) c) (b2 c (λ (_t e) e))))
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ (_t e) e))
(λ dummy ((λ p (p (λ (l r) l))) l)))
(λ x x))))
n1)
((λ (l)
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) l)
(λ dummy (λ (_t e) e))
(λ dummy ((λ p (p (λ (l r) l))) l)))
(λ x x))))
n2)
c)))))
(λ x x))))))
n2 r (λ (_t e) e))
r))))
(λ x x))))))
n
(f
(((λ f ((λ g (g g)) (λ (h x) ((f (h h)) x))))
(λ f
(λ n
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) (λ (_t e) e)))) n) (λ dummy n)
(λ dummy
(λ nil
(((λ (c _t e) (c _t e)) ((λ p (p (λ (l r) l))) n)
(λ dummy
(((λ p (p (λ (l r) (λ (_t e) e)))) ((λ p (p (λ (l r) r))) n))
(λ s (λ (_t e) _t))
((λ (l r s) (s l r)) (λ (_t e) e) ((λ p (p (λ (l r) r))) n))))
(λ dummy ((λ (l r s) (s l r)) (λ (_t e) _t) (f ((λ p (p (λ (l r) r))) n)))))
(λ x x)))))
(λ x x))))))
n))
(λ s (λ (_t e) _t)))))
(λ x x))))))
I didn't write that all by hand, of course. I built it up in pieces and my "compiler" assembled it for me. But start to finish that took me about an hour.
No, of course not. Factorials are just an illustrative example.
My goal was to dispel the myth that the lambda calculus is nothing more than an intellectual curiosity. It is much, much more than that, not because you can use it to compute factorials, but because the techniques that you use to compute factorials in LC are generally applicable. If you really want to master the craft there are a few things you really need to know how to do: write your own compiler, your own editor, your own implementation of the Hindley-Milner type system. Writing an efficient factorial in LC is like that. It's not useful in and of itself, but for what you learn simply by going through the process.
Your initial words were "Here is how you can use the lambda calculus to compute factorials in non-geological time" which led me to believe you were claiming LC was a very (the most?) efficient route to factorial calculation. Which would make me sit up and take notice were that the case.
I'm sorry to say that your factorial example just seems painful and pointless to me, and doesn't encourage me to expand my mind via LC at all.
> LC was a very (the most?) efficient route to factorial calculation
It is, because it is a standard model for computation. To put it in layman's terms, every programming language is lambda calculus in disguise. You have absolutely no idea of computability theory. That is the theory in which the P=NP conjecture is formulated, which has massive implications e.g. on cryptography.
You seem frustrated for not understanding at all what's going on here and you try to provoke answers. I know what I'm talking about, because I don't grok LC either and I get furious quickly when I'm hungry, tired or overworked. You are paranoid.
Hacker News was written in a Lisp. Lisp was developed as a language for describing programs more in the manner that the Lambda Calculus describes algorithms, in contrast to languages that describe programs more in the manner that Turing Machines describe algorithms.
So as recently as now, it might be argued that the Lambda Calculus has helped waste time reading this here comment. Whether that's something it's done for anyone or undone is a matter of perspective.
I think there are two interpretations and @dewster probably wants to refer to only one of them:
- Knowledge that is implicitly contained in larger objects or concepts, but there is no need for users of those larger objects/concepts to know them. That is the vast majority of all knowledge and encompasses the things we don't even know we don't know yet. You can use a hammer on a nail without knowing anything about what is holding that hammer together or about Newton's laws.
- Explicit knowledge that you use directly. The comment is about this. Meaning: a response that lambda calculus is implicitly with us here and now falls into the first category above and is not what is meant. The question is why you would be better off knowing the concept directly, apart from intellectual curiosity - which we can satisfy for only an insignificantly tiny fraction of the things we do.
Each time people want to add to the curriculum "you absolutely must know this!" they seem to be missing this distinction.
Given the incredible amount of knowledge and how each and every piece of it is somehow important - but almost always only implicitly - I prefer a mindset that strives to reduce the amount I want other people to learn. Saying "no, you don't have to know/learn this" seems to be a better goal than trying to list ever more stuff. Also, the "need to know" should be much more targeted - "every programmer needs to know" is probably wrong for a lot of (most?) stuff that is labeled thus.
Thank you for expressing that better than I could ever do. Outside of JS sucking but being saved by extensibility, and academic interests, I've never heard a concrete reason why LC is important in a practical sense to most programmers. Can't swing a dead cat on the internet without hitting an LC article lately, and I really don't get the sudden interest and insistence that we all must learn it.
Looking into LC I was led to Curry's Paradox, the natural language version being "If this sentence is true, then Germany borders China" which (again, I'm probably just an idiot) doesn't strike me as paradox material.
This isn't some random reverse snobbery rant, I'm all for digging deep and really understanding things (I do this all the time, probably more than most engineers). But I can't see the Emperor's clothes everyone seems to be admiring, and it isn't for lack of looking.
"If this sentence is true, <false thing>" is a paradox like "this sentence is false". Implication is trivially true when the condition is false. But that would make the condition true and the sentence false.
I think it's clear that this is a paradox in the field of logic - but only there. Few people not trained in and/or thinking of "logic" (that math thing) will find this sentence paradoxical - more like "useless/meaningless". Normal people use "paradox" more for things like the "French paradox" (the French are healthy despite eating "unhealthily"), which isn't a paradox in any logical sense. And even on a human level, I myself see the paradox more in the fact that some people see a paradox at all, instead of just admitting that what they think is true about nutritional science just isn't (that French phenomenon can be shown to be true in statistics; it's not just imagination).
> Normal people use paradox more for things like the "French paradox"
Maybe. Certainly, common use (and traditional use, e.g. Zeno's Paradox) includes things that are not strictly a logical paradox. But I think that most people (and certainly overwhelmingly most programmers) consider "this sentence is false" to be a paradox - which is part of why I picked it as a point of reference.
The best answer to this is the success of JavaScript. JavaScript historically has been a deeply flawed language. However, it was able to become one of the most widely used languages precisely because, despite all its warts, it implements the core foundations from the lambda calculus correctly (lambda functions, first class functions and closures). This allowed JS programmers enormous power to overcome these warts. Here are two examples:
JavaScript historically had no way to do namespacing. In most languages this would be a deal breaker. But because JavaScript has lambda functions, closures and first-class functions, a solution could be crafted from scratch! Immediately-Invoked Function Expressions (IIFE) were one of the most powerful early techniques to create scopes on the fly in JavaScript. Without these tools from the lambda calculus you would normally have to rely on language-level changes to allow these problems to be fixed.
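A quick sketch of the IIFE idea (my own toy example, written as TypeScript-flavored JS): the function expression creates a private scope, and only what you return escapes as the "namespace".

  const myLib = (function () {
    let counter = 0;                       // private: invisible outside the IIFE
    function next(): number { return ++counter; }
    return { next };                       // the public surface
  })();

  console.log(myLib.next()); // 1
  console.log(myLib.next()); // 2
  // console.log(counter);   // error: counter is not in scope out here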
The other early challenge of JavaScript was the need for asynchronous callbacks. Often these require passing around data that you might not have access to when a function is written. Lambda functions allow programmers to quickly write ad hoc logic. First class functions allow programmers to pass around this logic. And most importantly closures allow you to create functions on the fly based on data that is not known until an asynchronous callback is applied. Again without the core ideas of the lambda calculus in place this type of power would require significant language-level design changes.
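And a sketch of the closure-over-data idea (again a toy example of mine; fetchUser/loadProfile are made-up names): the callback is created on the fly and captures userId, which won't otherwise be around by the time the response arrives.

  function fetchUser(userId: string, handle: (body: string) => void): void {
    // stand-in for a real async request (XHR/fetch in a browser)
    setTimeout(() => handle(`payload for ${userId}`), 100);
  }

  function loadProfile(userId: string): void {
    fetchUser(userId, (body) => {
      // userId is closed over -- no globals, no extra plumbing
      console.log(`rendering profile ${userId}:`, body);
    });
  }

  loadProfile("u42");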
10 years ago JavaScript was a hideous language, with many major issues. But because in this mess was contained the power of the lambda calculus the language was able to be salvaged and extended to a wide range of uses.
> it was able to become one of the most widely used languages precisely because, despite all its warts, it implements the core foundations from the lambda calculus correctly
You think that's why Javascript is widely used? I think it's widely used because it was already useable within the browser and people just kept going with it.
As others have noted, LC is hardly trendy crap and you might say that 80% of CS is just trying to find more natural names for LC.
But what I'll say is that LC has done a lot for you lately. It, alongside Turing Machines, formed the basis for pretty much all of CS research and programming language design since the beginning of the 20th century. It has heavily influenced the design of probably every programming language you've ever used.
LC pretty much defines the notion of lexical scope. It pretty much defines the notion of constant and nearly defines the idea of variable as used in most languages. It gives a quick proof that recursion arises naturally even in very simple linguistic situations. In fact, it even gives the name of the very website you're reading right now.
LC is a huge force behind how people have been forming new ideas around Functional Programming over the last few decades and how they continue to now.
LC is hugely important. You don't have to learn it because it's already been pre-digested into everything you do in programming. But to not know it is to miss out on seeing the core of how your tools work.
Again, I'm most likely an idiot, but I'm just not seeing it in any of the examples I've encountered. Not trying to be dense, I realize the onus is mostly on me here, and I really am interested in fundamentals. Just wondering if there is a better example of why I should go out of my way to understand lambda calculus, cause this one ain't cutting it. LC may well indeed form the basis of everything CS - if so why doesn't it seem very impressive at first (and second, and third) sight?
> why doesn't it seem very impressive at first (and second, and third) sight?
That's a super interesting question. Pretty much the entire challenge of LC is coming to understand why it's interesting. From our perspective, so far past the consequences of this discovery, it will all seem too obvious to state. The interestingness of LC comes out of it being so austere and so powerful.
Frankly, this page's intro to it is sort of terrible. Clojure is too powerful and clunky to point directly at LC and show why it's interesting.
LC is a language with three constructors:
var[x] reference a name
lam[x](...) create a scope where x is *known*
app(f, v) when f is lam[x], give meaning to x within its body
Forget ideas of functions or anything else you know from CS. Just think about what these three constructors mean. To give them real meaning we have to define one more thing: the reduction step. It's what you think it is
app(lam[x](body), value) ==> body{x -> value}
where the right side means that we replace all instances of "x" within "body" with "value".
So imagine that to be the definition of a programming language---the whole thing. It seems a bit silly. It doesn't obviously have any of
- numbers
- if-then-else
- for loops
- recursion
- mutation
- data types
It doesn't even seem to have any way of talking about how it instructs a machine on what to do. It's truly austere.
So the magic, of course, is that it actually does have all of the above. Those 3 constructors and one rule are powerful enough already to generate all of that.
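Here's a small taste, using TypeScript arrow functions as stand-ins for LC terms (the types are simplified - real Church numerals are fully polymorphic): booleans, if-then-else and numbers really are just abstraction and application.

  type Bool = <A>(t: A, f: A) => A;            // Church booleans: pick one of two
  const TRUE: Bool  = (t, _f) => t;
  const FALSE: Bool = (_t, f) => f;
  const IF = <A>(c: Bool, t: A, f: A): A => c(t, f);

  type Num = (s: (x: number) => number, z: number) => number;  // "apply s, n times, to z"
  const ZERO: Num = (_s, z) => z;
  const SUCC = (n: Num): Num => (s, z) => s(n(s, z));
  const toJS = (n: Num): number => n(x => x + 1, 0);

  const THREE = SUCC(SUCC(SUCC(ZERO)));
  console.log(toJS(THREE));               // 3
  console.log(IF(TRUE, "then", "else"));  // "then"

Recursion falls out the same way via a fixed-point combinator, which is exactly the (λ g (g g)) self-application trick in the expanded factorial upthread.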
If you work in a big company making internal web apps for HR to manage payroll, then no, the lambda calculus will never be useful to you.
However, if you want to do research, it is an invaluable model of how computation works. Its simplicity allows for reasoning about complex ideas while minimizing the cognitive overhead of the model.
It may also help you understand why certain functional styles behave the way they do or how to better implement programming languages.
If none of those things interest you, then yes, it is not useful. But a large crowd here IS interested in computer science theory and programming language theory and functional programming so they are interested.
As a final analogy, you might consider this the 'Standard Model' of computation much like there is the Standard Model in particle physics. If you are a civil engineer, it will never be useful. If you are a chemist, it might be useful only in passing. But if you are the physicist, it is invaluable.
Altera Quartus has supported SV (SystemVerilog) quite well for some time now. I tend to work in 9.1sp2 as that was the last version of Quartus with integrated simulator (waaa!).
Xilinx requires you to use their newer tooling in order to write in SV. As these tools tend to be incredibly bloated, this is a disadvantage IMO (you want to use the earliest and therefore least bloaty tool that does the job).
I started out in VHDL and was forced by the other person on the "team" to learn verilog for a project. As with ANY language, you end up trying a bunch of stuff out to see how things are implemented, what breaks, etc. in order to get your footing. Over time I grew to appreciate the C like nature of verilog - its reduced verboseness over VHDL makes it an easier language to get actual things done in. And it makes switching between C and verilog pretty natural.
I resisted looking into SystemVerilog for forever as I was under the impression that it was more of a systems verification thing, but it is actually just plain old verilog with some incredibly useful extensions for synthesis. The package system is really, really nice. Look online for papers by Stuart Sutherland.
No language is perfect, and there are things I would change in SV (support for negative bit indices, less awkward sub vector select, etc.) but SV comes closest to the ideal HDL IMO. HW design via HDL is fascinating, and a strangely small space for the times we live in.
Using high level languages to do clock-based concurrent stuff is IMO insane as it just adds to the chain breakage, and few will be trained and able to easily use the source. Would you listen to anyone who proposed a VHDL to Haskell converter for writing everyday conventional software? HDLs are close to the metal for a reason.