This is not quite true. There is SPJ's famous "The Implementation of Functional Programming Languages" [1]. Then there is a lot of material on various abstract machines: the SECD machine, the CAM, the many variants of the Krivine machine, the G-machine, and plenty of others whose names I forget.
All that said, I kind of agree with you. It would be nice to have some easy-to-grasp explanations of how to compile higher-order functions to actual machines: x86, ARM, MIPS ... This material can be found e.g. in Appel's book [2], but I don't think it's available freely (and legally) on the web. I think this is in part because the compilation process is not so easy: you need to understand the compilation of conventional first-order languages, in particular stack layout, and then add the extra pointers that closures need to reach their captured variables. You also need to think about memory management, since a closure can live longer than the context that created it. This is quite rich material, and it is usually not covered even in undergraduate compilers courses.
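To make that concrete, here is a minimal sketch in C of what a compiler might emit for something like "let add x = fun y -> x + y". The names (closure, add_inner) are invented for illustration, and malloc stands in for a real garbage collector; the point is that the compiled closure pairs a code pointer with its captured variable, and it must live on the heap precisely because it outlives the stack frame that created it.

    /* Sketch of closure conversion for: let add x = fun y -> x + y
       The inner function captures x, so the closure is heap-allocated:
       it must survive the return of add(). malloc stands in for a GC. */
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct closure {
        int (*code)(struct closure *self, int y); /* code pointer */
        int x;                                    /* captured free variable */
    } closure;

    /* Compiled body of "fun y -> x + y": x is fetched from the environment. */
    static int add_inner(closure *self, int y) {
        return self->x + y;
    }

    /* Compiled body of "add": allocate the closure and fill in its fields.
       Returning a pointer into add's own stack frame would be the classic
       lifetime bug mentioned above. */
    static closure *add(int x) {
        closure *c = malloc(sizeof *c);
        c->code = add_inner;
        c->x = x;
        return c;
    }

    int main(void) {
        closure *add3 = add(3);               /* partial application */
        printf("%d\n", add3->code(add3, 4));  /* prints 7 */
        free(add3);
        return 0;
    }

Much of the remaining design space is in how the environment is represented (a flat record of captured values, as here, versus a linked chain of frames), which is exactly the kind of material Appel covers.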
The situation with concurrency is much worse, but that only reflects the fact that concurrency is far from a solved problem on many levels. To quote a recent paper [3]:
"Despite decades of research, we do not have [in 2015] a satisfactory concurrency semantics for any general-purpose programming language that aims to support concurrent systems code [...] Disturbingly, 40+ years after the first relaxed-memory hardware was introduced (the IBM 370/158MP), the field still does not have a credible proposal for the concurrency semantics of any general-purpose high-level language that includes high-performance shared-memory concurrency primitives. This is a major open problem for programming language semantics."
And memory models are but one issue to deal with in
concurrency.
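As a small illustration of why, here is the classic "store buffering" litmus test, sketched in C11 (threads.h is optional in C11 and missing from some libcs, so you may need pthreads in practice). No interleaving of the two threads can leave both r0 and r1 at 0, yet with relaxed atomics that outcome is allowed, and on real hardware it actually happens:

    /* Store-buffering litmus test: can both threads read 0? */
    #include <stdatomic.h>
    #include <stdio.h>
    #include <threads.h>

    atomic_int x = 0, y = 0;
    int r0, r1; /* safe to read from main() after the joins */

    static int t0(void *arg) {
        (void)arg;
        atomic_store_explicit(&x, 1, memory_order_relaxed);
        r0 = atomic_load_explicit(&y, memory_order_relaxed);
        return 0;
    }

    static int t1(void *arg) {
        (void)arg;
        atomic_store_explicit(&y, 1, memory_order_relaxed);
        r1 = atomic_load_explicit(&x, memory_order_relaxed);
        return 0;
    }

    int main(void) {
        thrd_t a, b;
        thrd_create(&a, t0, NULL);
        thrd_create(&b, t1, NULL);
        thrd_join(a, NULL);
        thrd_join(b, NULL);
        /* r0 == 0 && r1 == 0 is permitted: each store can still sit in
           a store buffer while the other thread performs its load. */
        printf("r0=%d r1=%d\n", r0, r1);
        return 0;
    }

On x86 this particular outcome comes from store buffers alone; ARM and POWER permit many more reorderings, and pinning down a language-level semantics that allows such outcomes without also allowing nonsense (out-of-thin-air values) is precisely the open problem Batty et al. describe.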
[1] http://research.microsoft.com/en-us/um/people/simonpj/papers...
[2] A. Appel, Modern Compiler Implementation in ...
[3] M. Batty et al., The Problem of Programming Language Concurrency Semantics.